WO2007026598A1 - Medical image processor and image processing method - Google Patents

Medical image processor and image processing method

Info

Publication number
WO2007026598A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
blood vessel
head
image processing
processing apparatus
Prior art date
Application number
PCT/JP2006/316595
Other languages
French (fr)
Japanese (ja)
Inventor
Hiroshi Fujita
Yoshikazu Uchiyama
Toru Iwama
Hiromichi Ando
Hitoshi Futamura
Original Assignee
Gifu University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Gifu University
Priority to JP2007533204A (JP4139869B2)
Publication of WO2007026598A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
    • A61B6/50Clinical applications
    • A61B6/504Clinical applications involving diagnosis of blood vessels, e.g. by angiography
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
    • A61B6/52Devices using data or image processing specially adapted for radiation diagnosis
    • A61B6/5211Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
    • A61B6/5217Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data extracting a diagnostic or physiological parameter from medical diagnostic data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30016Brain
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30101Blood vessel; Artery; Vein; Vascular
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03Recognition of patterns in medical or anatomical images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/14Vascular patterns

Definitions

  • the present invention relates to a medical image processing apparatus and an image processing method for performing image analysis and image processing of a head image obtained by imaging a patient's head.
  • Detection of an unruptured cerebral aneurysm by a doctor is performed using an MRA image (Magnetic Resonance Angiography) in which the blood flow in the blood vessel is imaged by MRI.
  • For interpretation, a 2D image created by MIP processing (Maximum Intensity Projection) is used. Because an unruptured cerebral aneurysm occurring in a blood vessel is small, it must be distinguished from the surrounding blood vessel images that are displayed in an overlapping manner, and the doctor's fatigue is therefore severe. There is also the possibility of oversight due to this fatigue.
  • A technique is also disclosed in which, at intersections, the luminance of the blood vessel image with the larger depth information, that is, the blood vessel portion on the rear side in the line-of-sight direction, is reduced for display (for example, see Patent Document 2). According to this method, it is possible to give a sense of perspective to the blood vessel image, and it becomes easy to observe the blood vessel site on the near side.
  • Patent Document 1 Japanese Patent Laid-Open No. 2002-112986
  • Patent Document 2 JP-A-5-277091
  • Non-Patent Document 1: Norio Hayashi et al., "Automatic extraction of the cerebellum and affected brain areas from head MRI images using morphological processing", Journal of the Medical Image Information Society, vol. 21, no. 1, pp. 109-115, 2004
  • Non-Patent Document 2 Ryuro Yokoyama et al., Automatic detection of lacunar infarct area in brain MR images, Journal of Japanese Society of Radiological Technology, 58 (3), 399-405, 2002
  • The technique of Patent Document 2 makes it easier to observe the blood vessel portion closer to the observer. However, in the MIP image created along the line-of-sight direction that the doctor wants to observe, another blood vessel portion that crosses in front of the blood vessel portion of interest cannot be removed, so it is not always possible to observe the desired blood vessel site in detail from the desired observation direction.
  • An object of the present invention is to detect a lesion from a head image with high accuracy, and to make it possible to observe the image while paying attention to a specific blood vessel site.
  • the invention described in claim 1 is a medical image processing apparatus
  • reconstructing means for reproducing the original image only in candidate regions of a lesion where the calculated vector concentration degree is at or above a predetermined value;
  • deleting means for deleting, from the candidate regions of the lesion reproduced in the head image, false positive candidate regions that are normal blood vessels;
  • the invention according to claim 8 is the medical image processing apparatus according to claim 1, wherein:
  • image control means for discriminating one or more blood vessel sites included in the extracted blood vessel image and attaching blood vessel site information concerning the discriminated sites to the head image.
  • image elements other than the candidate region can be excluded from the calculated feature values, so the feature values for the candidate region can be calculated accurately. Therefore, it is possible to further improve the accuracy of the detection processing itself.
  • the vector concentration degree is substantially calculated only in the blood vessel region, where an aneurysm lesion can exist. This makes it possible to reduce the calculation time.
  • by referring to the blood vessel site information attached to the target image, the positions of one or more blood vessel sites included in the blood vessel image of the target image can be determined easily. If a blood vessel site can be identified, each site can, for example, be identified and displayed when the target image is displayed, and each site included in the target image can be specified and the information provided to the doctor when the target image is used. Therefore, the doctor can observe the target image while paying attention to a specific blood vessel site, and the interpretation efficiency can be improved.
  • the position and name of each blood vessel site included in the blood vessel image of the target image can be identified easily.
  • the position and name of each blood vessel part included in the target image can be specified when the target image is used, and the information can be provided to the doctor. Therefore, the doctor can easily grasp the position and name of a specific blood vessel site in the target image.
  • although the form of the blood vessel image in the target image differs from subject (patient) to subject, the blood vessel images are aligned by affine transformation so that they substantially match, whereby the reference image and the blood vessel image of the target image can be associated with each other with high accuracy. Therefore, blood vessel sites can be discriminated regardless of individual differences between subjects, and the versatility is high.
  • the doctor can easily identify each of one or a plurality of blood vessel portions included in the blood vessel image of the target image.
  • the doctor can easily grasp the names of one or a plurality of blood vessel parts included in the blood vessel image of the target image.
  • FIG. 1 is a diagram showing the internal configuration of the medical image processing apparatus according to the embodiment.
  • FIG. 3A is a diagram showing an example of an MRA image.
  • FIG. 3B is a diagram showing an example of an extracted image obtained by extracting a blood vessel region.
  • FIG. 4 is a diagram showing a vector concentration filter.
  • FIG. 5 is a diagram showing a cerebral aneurysm model and a blood vessel region model.
  • FIG. 6 is a diagram showing an output image example of a vector concentration filter.
  • FIG. 7A is a diagram showing a filtered image before threshold processing.
  • FIG. 7B is a diagram showing a filtered image after threshold processing.
  • FIG. 8 is a diagram for explaining a method of calculating sphericity.
  • FIG. 9A is a diagram for explaining an identification method based on a rule-based method.
  • FIG. 9B is a diagram illustrating an identification method based on a rule-based method.
  • FIG. 10 is a diagram showing an output example of a detection result of a cerebral aneurysm candidate.
  • FIG. 12A is a diagram showing an example of a reference image.
  • FIG. 12B is a diagram showing an original image used to create a reference image.
  • FIG. 13A is a diagram showing a target image and a histogram of the target image before and after performing normalization processing.
  • FIG. 13B is a diagram showing a target image of a subject different from that in FIG. 13A and a histogram of the target image before and after performing normalization processing.
  • FIG. 14A is a diagram showing a target image.
  • FIG. 14B is a diagram showing a blood vessel extraction image obtained by extracting blood vessels from the target image shown in FIG. 14A.
  • FIG. 15A is a diagram showing a blood vessel extraction image and a reference image.
  • FIG. 15B is a diagram in which the blood vessel extraction image and the reference image shown in FIG. 15A are superimposed.
  • FIG. 16A is a diagram showing landmarks in a reference image.
  • FIG. 16B is a diagram showing corresponding points in a blood vessel extraction image.
  • FIG. 17A is a diagram showing a blood vessel extraction image and a discrimination result of the blood vessel site.
  • FIG. 17B is a diagram showing a blood vessel extraction image and a discrimination result of the blood vessel site.
  • FIG. 18 is a diagram showing an example of identification display of each blood vessel part discriminated in the target image.
  • FIG. 19 is a flowchart showing a detection process according to the second embodiment.
  • FIG. 20 is a diagram showing an analysis bank of a GC filter bank.
  • FIG. 21 is a diagram showing a filter bank A (z j ).
  • FIG. 22 is a diagram showing a reconfiguration bank of a GC filter bank.
  • FIG. 23 is a diagram showing a filter bank S (z j ).
  • FIG. 1 shows the configuration of the medical image processing apparatus 10 in the present embodiment.
  • the medical image processing apparatus 10 detects candidate regions of lesions from a medical image obtained by examination imaging by performing image analysis on the medical image.
  • note that the medical image processing apparatus 10 may instead be provided in a medical image system in which various apparatuses are connected via a network, such as an image generation apparatus that generates medical images, a server that stores and manages medical images, and an image interpretation terminal that obtains medical images and displays them on display means.
  • an example in which the present invention is realized by a single medical image processing apparatus 10 will be described.
  • alternatively, the functions of the medical image processing apparatus 10 may be distributed among the components of the medical image system, so that the present invention is realized by the medical image system as a whole.
  • the medical image processing apparatus 10 includes a control unit 11, an operation unit 12, a display unit 13, a communication unit 14, a storage unit 15, and a lesion candidate detection unit 16.
  • the control unit 11 includes a CPU (Central Processing Unit), a RAM (Random Access Memory), and the like; it reads the various control programs stored in the storage unit 15, performs various calculations, and comprehensively controls the processing operations of each of the units 12-16.
  • the operation unit 12 includes a keyboard, a mouse, and the like. When these are operated by an operator, an operation signal corresponding to the operation is generated and output to the control unit 11. Note that a touch panel configured integrally with the display in the display unit 13 may be provided.
  • the display unit 13 includes display means such as an LCD (Liquid Crystal Display). In response to instructions from the control unit 11, it displays various kinds of display information on the display means, such as operation screens, medical images, and the detection results of lesion candidates detected from the medical images together with their detection information.
  • the communication unit 14 includes a communication interface, and transmits and receives information to and from an external device on the network. For example, the communication unit 14 performs a communication operation such as receiving a medical image generated from the image generation device and transmitting detection information of a lesion candidate in the medical image processing device 10 to an interpretation terminal.
  • the storage unit 15 stores the control program used by the control unit 11, various processing programs such as the detection processing program used by the lesion candidate detection unit 16, the parameters necessary for executing each program, and data such as the processing results.
  • the storage unit 15 also stores the medical images that are targets of lesion candidate detection, information on the detection results, and the like.
  • the lesion candidate detection unit 16, in cooperation with the processing programs stored in the storage unit 15, applies various kinds of image processing (gradation conversion processing, sharpness adjustment processing, dynamic range compression processing, and the like) to the image to be processed as necessary. The lesion candidate detection unit 16 also executes the detection processing and outputs the detection result. The contents of the detection processing will be described later.
  • in the present embodiment, an example of detecting a lesion candidate for an unruptured cerebral aneurysm from an MRA image (three-dimensional image), obtained by imaging the patient's head with MRI so that the blood flow in the brain is visualized, will be described.
  • A cerebral aneurysm is a bulge (dilation) that forms in the wall of an artery and is caused by the blood pressure exerted on the artery wall. If a thrombus forms inside the cerebral aneurysm, or if the aneurysm ruptures, serious diseases such as subarachnoid hemorrhage may develop.
  • FIG. 2 is a flowchart for explaining the flow of detection processing. As described above, this detection process is a process executed when the lesion candidate detection unit 16 reads a detection processing program stored in the storage unit 15.
  • MRA 3D image data is first input (step S1). Specifically, the MRA image to be processed, stored in the storage unit 15, is read by the lesion candidate detection unit 16.
  • MRI is a method for obtaining an image using nuclear magnetic resonance (hereinafter referred to as NMR) in a magnetic field.
  • in NMR, an object is placed in a static magnetic field and is then irradiated with an RF pulse (radio wave) at the resonance frequency of the atomic nucleus to be detected in the object.
  • the resonance frequency of the hydrogen atom that constitutes the water that is abundant in the human body is usually used.
  • when the object is irradiated with the RF pulse, an excitation phenomenon occurs: the nuclear spins of the atoms that resonate at the resonance frequency become aligned, and the nuclear spins absorb the energy of the RF pulse.
  • when irradiation of the RF pulse is stopped in this excited state, a relaxation phenomenon occurs: the phases of the nuclear spins become nonuniform, and the nuclear spins release energy.
  • the time constant of this phase relaxation is called T2, and the time constant of the energy relaxation is called T1.
  • in MRI, a T1-weighted image is used to depict anatomical structures, and a T2-weighted image is used to detect lesions.
  • images taken by the FLAIR method are T2-weighted images in which the signal from water is attenuated, and are specifically called FLAIR images.
  • MRA is a blood vessel imaging method in MRI. In MRI, a gradient magnetic field is applied in the direction from the subject's feet to the head, so that the blood within a slice is saturated; blood with flow that enters the slice unsaturated produces a high signal.
  • MRA is a method of imaging blood vessels with blood flow by imaging this high signal.
  • FIG. 3A shows an example of MRA image.
  • the blood vessel region with blood flow has a high signal, so the blood vessel region appears white in the MRA image.
  • the 3D image data is preprocessed as a preparation stage for detecting candidates (step S2).
  • as preprocessing, normalization processing and gradation conversion processing of the image data are performed. Normalization converts the data by linear interpolation so that all the edges making up each voxel become equal in length, yielding 3D image data composed of voxels of the same size.
  • next, density gradation conversion processing is performed on the 3D image data converted into equal-sized voxels, and the signal value of each voxel is linearly converted into a density gradation of 0 to 1024. The higher the signal value, the closer the density value is to 1024; the lower the signal value, the closer it is to 0.
  • note that the density gradation range is not limited to 0 to 1024 and can be set as appropriate.
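The normalization and density gradation conversion described above amount to a linear rescaling of voxel signal values onto a 0 to 1024 scale. A minimal sketch in Python with NumPy (the function name and the use of the volume's own min/max as the mapping endpoints are illustrative assumptions, not from the patent):

```python
import numpy as np

def to_density_gradation(volume, levels=1024):
    """Linearly map raw voxel signal values onto a 0..levels density scale.

    Sketch of the gradation conversion in step S2: the highest signal value
    maps to `levels` (1024 in the text) and the lowest maps to 0.
    """
    volume = volume.astype(np.float64)
    lo, hi = volume.min(), volume.max()
    return (volume - lo) / (hi - lo) * levels

# Toy 2x2 "volume": signal 0 maps to 0.0, signal 1000 maps to 1024.0.
vol = np.array([[0, 500], [250, 1000]], dtype=np.int32)
out = to_density_gradation(vol)
```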
  • a blood vessel image region is extracted from the three-dimensional MRA image (step S3).
  • threshold processing is performed, and the MRA image is binarized.
  • the blood vessel region is whitened and the other regions appear black.
  • in the binarized image, the blood vessel region has a different value from the other regions, so the blood vessel region is extracted by the region growing method.
  • first, the starting voxel (the voxel with the whitest density value) is determined. Next, the 26 voxels neighboring the determined voxel are examined in the 3D MRA image before binarization, and any neighboring voxel satisfying a certain judgment condition (for example, a density value of 500 or more) is determined to be part of the blood vessel region. The same processing is then repeated for each neighboring voxel determined to be part of the blood vessel region. In this way, the blood vessel region can be extracted by sequentially collecting voxels that satisfy the judgment condition while expanding the region.
  • Fig. 3B shows the blood vessel region extracted from the MRA image of Fig. 3A. The extracted blood vessel region is white (density value 1024) and the other regions are black (density value 0).
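The region growing extraction can be sketched as a breadth-first traversal over the 26-neighbourhood. The neighbourhood and the example condition (density value 500 or more) follow the text; the seed choice (the global maximum voxel) and the queue-based implementation are assumptions:

```python
import numpy as np
from collections import deque

def region_grow(vol, threshold=500):
    """Extract a connected bright region by 26-neighbour region growing.

    Starts from the brightest voxel and absorbs neighbours whose density
    satisfies the judgment condition (>= threshold), as in step S3.
    """
    seed = np.unravel_index(np.argmax(vol), vol.shape)
    mask = np.zeros(vol.shape, dtype=bool)
    if vol[seed] < threshold:
        return mask
    mask[seed] = True
    queue = deque([seed])
    # All 26 offsets around a voxel (exclude the voxel itself).
    offsets = [(dz, dy, dx) for dz in (-1, 0, 1) for dy in (-1, 0, 1)
               for dx in (-1, 0, 1) if (dz, dy, dx) != (0, 0, 0)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < vol.shape[i] for i in range(3)) \
                    and not mask[n] and vol[n] >= threshold:
                mask[n] = True
                queue.append(n)
    return mask
```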
  • next, the extracted 3D MRA image of the blood vessel region is filtered using a vector concentration degree filter as shown in Fig. 4, and a primary candidate region of a cerebral aneurysm is detected from the processed image output by the filtering (step S4).
  • the vector concentration degree filter calculates the vector concentration degree at each voxel, and outputs an image in which the calculated vector concentration degree is used as the value of that voxel.
  • the vector concentration degree focuses on the direction of the gradient vector of density change and evaluates how much the gradient vector in the neighboring area is concentrated at a certain point of interest.
  • FIG. 5 shows a cerebral aneurysm model and a blood vessel model.
  • if the extracted blood vessel region exists within the range of a sphere of radius R centered on the voxel of interest P, the vector concentration degree is calculated.
  • the vector concentration degree is calculated by the following Formula 1:

  C(P) = (1/M) Σj cos θj   (Formula 1)

  • here, the angle θj indicates the angle between the direction vector from the voxel of interest P to the peripheral voxel Qj and the direction of the gradient vector at the peripheral voxel Qj, and M indicates the number of peripheral voxels Qj included in the calculation.
  • a filtered image as shown in Fig. 6 is output, with the vector concentration degree as each voxel value. Since the vector concentration degree is output in the range 0 to 1, in Fig. 6 the higher the vector concentration degree (the closer to 1), the whiter the image appears.
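Formula 1 can be sketched directly: for each voxel Qj within radius R of the point of interest P, take the cosine of the angle between the direction P→Qj and the gradient direction at Qj, and average over the M contributing voxels. The gradient sign convention below (oriented so that a bright sphere centred on P scores near 1) is an assumption the patent does not spell out:

```python
import numpy as np

def vector_concentration(vol, p, radius):
    """Vector concentration degree at voxel p: the mean cosine of the angle
    between the direction vector from p to each in-range voxel Qj and the
    density-gradient direction at Qj (sketch of Formula 1).
    """
    gz, gy, gx = np.gradient(vol.astype(np.float64))
    # Sign flipped so gradients point outward from a bright blob centre
    # (assumed convention).
    grad = np.stack([-gz, -gy, -gx], axis=-1)
    coords = np.indices(vol.shape).reshape(3, -1).T
    dist = np.linalg.norm(coords - np.asarray(p), axis=1)
    cos_sum, m = 0.0, 0
    for idx in coords[(dist > 0) & (dist <= radius)]:
        g = grad[tuple(idx)]
        gn = np.linalg.norm(g)
        if gn == 0:
            continue  # flat voxels carry no direction information
        v = idx - np.asarray(p)
        cos_sum += np.dot(v, g) / (np.linalg.norm(v) * gn)
        m += 1
    return cos_sum / m if m else 0.0
```

For a spherical bright blob the gradients around the centre all point radially, so the concentration at the centre approaches 1, while elongated vessel segments score lower; this is the property the filter exploits to highlight aneurysm-like shapes.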
  • FIG. 7A is a diagram showing a part of the filtered image shown in FIG. 6.
  • when threshold processing is applied to it, a binarized image as shown in FIG. 7B is obtained.
  • the image regions that appear white in the binarized image, that is, those with a large vector concentration degree, are output as the primary candidate regions.
  • the threshold for binarization is determined using a teacher image in which the presence of a cerebral aneurysm is known in advance.
  • that is, a threshold value is determined so that only the image regions of cerebral aneurysms whose existence is already known are extracted. The threshold may also be obtained by statistical analysis.
  • for example, a density histogram of the filtered image is obtained, and the density value at which the cumulative area ratio from the maximum-density side reaches p% is determined as the threshold value.
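The area-ratio threshold can be sketched as taking the density value where the top p% of voxels begins, counted from the maximum-density end of the histogram (sorting stands in for an explicit histogram here; an implementation assumption):

```python
import numpy as np

def threshold_by_area_ratio(values, p):
    """Return the density value at which the cumulative area ratio,
    counted from the maximum-density side, reaches p percent.
    """
    flat = np.sort(np.ravel(values))[::-1]          # descending densities
    k = max(1, int(round(flat.size * p / 100.0)))   # top p% of voxels
    return flat[k - 1]

t = threshold_by_area_ratio(np.arange(100), 10)  # densities 0..99 -> 90
```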
  • next, feature quantities indicating the features of each primary candidate region are calculated (step S5). Since a cerebral aneurysm has a certain size and a spherical shape, in this embodiment the size of the candidate region, its sphericity, and the average vector concentration degree of the voxels in the region are calculated as feature quantities. However, as long as a cerebral aneurysm can be characterized, there is no particular limitation on which features are used; the maximum vector concentration degree or the standard deviation of the density values of the voxels may be calculated instead.
  • as the size, the volume of the candidate region is calculated. In practice, not the actual volume but the number of voxels constituting the region is calculated and used as the index of volume in the subsequent calculations.
  • the sphericity feature quantity is obtained as follows: a sphere having the same volume as the primary candidate region is placed so that the centroid of the primary candidate region coincides with the centroid of the sphere, and the sphericity is the ratio of the volume of the part of the primary candidate region that falls inside the sphere to the total volume of the primary candidate region.
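The sphericity computation can be sketched as follows, with voxel counts standing in for volumes as in the text (the function name is an assumption):

```python
import numpy as np

def sphericity(mask):
    """Sphericity of a candidate region: place a sphere of equal volume at
    the region's centroid and return the fraction of the region's voxels
    that fall inside that sphere (1.0 for a perfect ball).
    """
    pts = np.argwhere(mask)
    n = len(pts)
    centroid = pts.mean(axis=0)
    # Radius of the sphere whose volume equals n voxels: (4/3)*pi*r^3 = n.
    r = (3.0 * n / (4.0 * np.pi)) ** (1.0 / 3.0)
    inside = np.linalg.norm(pts - centroid, axis=1) <= r
    return inside.sum() / n
```

A roughly spherical region scores near 1, while an elongated, vessel-like region scores much lower, which is why this feature helps separate aneurysms from normal vessels.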
  • in step S6, secondary detection is performed based on the feature quantities.
  • a cerebral aneurysm is distinguished from a blood vessel that is a normal tissue.
  • here, an example of a discriminator using the rule-based method is shown, but the method is not limited to this; any method capable of discrimination can be used, for example, an artificial neural network, a support vector machine, or discriminant analysis.
  • a profile indicating the relationship between the sphericity with respect to the size and the average of the vector concentration with respect to the size is created.
  • the range to be detected as a secondary candidate (the range surrounded by the solid lines in FIGS. 9A and 9B) is determined in advance, and the above profiles are created using the feature quantities of each primary candidate to be discriminated as variable data.
  • a primary candidate is identified as a true positive candidate if its feature-quantity variable data falls within the detection range, and as a false positive candidate if it does not. In other words, only primary candidates whose feature-quantity variable data is distributed within the detection range are secondarily detected.
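The rule-based identification reduces to a per-feature range test. A minimal sketch, where the detection ranges are invented placeholders (the text obtains the real ranges from teacher images):

```python
def is_true_positive(size, sphericity, mean_concentration, ranges):
    """Secondary detection sketch: a primary candidate survives only if
    every feature quantity falls inside its predetermined detection range.
    """
    features = {"size": size, "sphericity": sphericity,
                "mean_concentration": mean_concentration}
    return all(lo <= features[k] <= hi for k, (lo, hi) in ranges.items())

# Illustrative ranges only; not values from the patent.
ranges = {"size": (20, 500), "sphericity": (0.6, 1.0),
          "mean_concentration": (0.5, 1.0)}
ok = is_true_positive(120, 0.85, 0.72, ranges)   # True: all in range
bad = is_true_positive(120, 0.40, 0.72, ranges)  # False: sphericity too low
```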
  • the detection range is determined using a teacher image that is previously known to be a cerebral aneurysm or a normal blood vessel.
  • that is, the size, sphericity, and average vector concentration degree are obtained from teacher images of cerebral aneurysms and normal blood vessels, and from these feature quantities, profiles of the sphericity with respect to size and of the average vector concentration degree with respect to size are created.
  • in the profiles, the variable data indicated by one type of marker are true positives, that is, teacher data known to be cerebral aneurysms, and the variable data indicated by the other type of marker are false positives, that is, teacher data known to be normal blood vessels.
  • the range surrounded by the four dotted lines indicating the threshold values is the detection range, and a detection range is determined for each of the "size-sphericity" and "size-average vector concentration degree" profiles.
  • then, a primary candidate whose variable data is located within the detection range is secondarily detected as a cerebral aneurysm candidate.
  • tertiary detection is further performed on the secondary detection candidates (step S7).
  • discriminant analysis is performed using three feature quantities.
  • for the discriminant analysis, any of the Mahalanobis distance, principal component analysis, a linear discriminant function, and the like can be applied, provided that the method differs from the one used at the time of secondary detection.
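As one concrete option among those listed, a minimal Mahalanobis-distance classifier can be sketched as below; the class labels, means, and covariances are illustrative assumptions, where in practice they would be estimated from teacher data:

```python
import numpy as np

def mahalanobis_classify(x, class_means, class_covs):
    """Assign feature vector x to the class with the smallest squared
    Mahalanobis distance (tertiary-detection sketch)."""
    best, best_d = None, np.inf
    for label, mean in class_means.items():
        diff = x - mean
        d2 = diff @ np.linalg.inv(class_covs[label]) @ diff
        if d2 < best_d:
            best, best_d = label, d2
    return best

# Made-up class statistics for the 3 features (size, sphericity, mean
# concentration, after scaling); not values from the patent.
means = {"aneurysm": np.array([1.0, 1.0, 1.0]),
         "vessel": np.array([5.0, 5.0, 5.0])}
covs = {"aneurysm": np.eye(3), "vessel": np.eye(3)}
label = mahalanobis_classify(np.array([1.2, 0.9, 1.1]), means, covs)
```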
  • finally, the candidate regions remaining after the tertiary detection are taken as the final cerebral aneurysm candidates, and the detection result is output (step S8).
  • FIG. 10 shows an example of a detection result displayed on the display unit 13.
  • in the MIP image created from the 3D MRA image, marker information (an arrow in FIG. 10) indicating the candidate region of the tertiarily detected cerebral aneurysm is displayed.
  • a MIP image is a 2D image created by applying MIP processing to the 3D MRA image data, in which the structures in the image can be observed three-dimensionally.
  • MIP processing, called the maximum intensity projection method, projects the volume with parallel rays from a certain direction and reflects the maximum luminance (signal value) among the voxels along each ray onto the projection plane, creating an image that enables 3D observation.
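For axis-aligned rays, the MIP processing just described is simply a per-ray maximum over the volume. A minimal sketch (arbitrary oblique ray directions would require resampling, which is omitted here):

```python
import numpy as np

def mip(volume, axis=0):
    """Maximum intensity projection: cast parallel rays along `axis` and
    keep the maximum voxel value on each ray, yielding the 2D MIP image.
    """
    return np.max(volume, axis=axis)

vol = np.zeros((3, 2, 2))
vol[1, 0, 0] = 7.0
vol[2, 1, 1] = 3.0
img = mip(vol, axis=0)  # 2x2 image: max along axis 0 is [[7, 0], [0, 3]]
```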
  • Information regarding detection of cerebral aneurysm candidates may be output and used as reference information at the time of diagnosis by a doctor.
  • the color of the marker information may be changed according to the vector concentration degree; for example, the arrow marker may be colored red if the vector concentration degree is 0.8 or higher, yellow if it is 0.7 to 0.8, blue if it is 0.5 to 0.7, and so on.
  • in this way, the doctor can easily grasp visually, from the vector concentration degree, how strongly spherical (aneurysm-like) each candidate is.
  • FIG. 10 shows an example in which the position of a cerebral aneurysm candidate is identified by marker information, but the cerebral aneurysm candidate region may instead be displayed so as to be distinguishable from the other regions, for example by volume rendering.
  • the volume rendering method performs three-dimensional display by giving color information and opacity to each voxel for each partial region. By setting the opacity high for the region of interest and low for the other regions, the region of interest can be emphasized. Accordingly, at the time of display, an opacity is set for each region and color information corresponding to the opacity is assigned.
  • alternatively, the detection result may be shown using the filtered image (see FIG. 6) obtained by the vector concentration degree filter instead of a MIP image or the like. In that case, regions where the vector concentration degree is low may be colored blue and regions where it is high may be colored red, for example, so that the doctor can visually grasp the calculated vector concentration degree.
  • alternatively, a filtered image in which the color of the blood vessel region is changed according to the vector concentration degree may be displayed superimposed on the corresponding position of the MIP image. In this way, vector concentration degree information can be provided as reference information for the doctor's detection of a cerebral aneurysm.
  • this blood vessel site discrimination processing is software processing realized by the control unit 11 in cooperation with the blood vessel site discrimination processing program stored in the storage unit 15.
  • in the blood vessel site discrimination processing, one or more blood vessel sites included in the blood vessel image appearing in the 3D MRA image obtained by imaging the head are discriminated.
  • MRA is a kind of MRI blood vessel imaging method.
  • by applying a gradient magnetic field in the direction from the subject's feet to the head (this direction is called the body axis), energy can be made to be absorbed only in a specific slice (tomographic section).
  • the blood in the slice is saturated by the RF pulses. Since blood is always flowing, unsaturated blood flows into the slice over time and the signal intensity in the slice increases.
  • MRA is a method for imaging blood vessels with blood flow by imaging this high signal.
  • the reference image is a blood vessel image on the three-dimensional MRA image in which the positions and names of one or a plurality of blood vessel parts are preset.
  • here, a blood vessel site refers to an anatomical classification of the blood vessels, and the position of a blood vessel site refers to the positions of the voxels belonging to that site.
  • in FIG. 12A, the positions and names of eight blood vessel sites included in the blood vessel image (the anterior cerebral artery, right middle cerebral artery, left middle cerebral artery, right internal carotid artery, left internal carotid artery, right posterior cerebral artery, left posterior cerebral artery, and basilar artery) are shown.
  • the names of the three vascular sites (right middle cerebral artery, anterior cerebral artery, and basilar artery) are shown, but all eight vascular sites are shown. The name is set.
  • The reference image g2 is generated from the three-dimensional data of the head MRA image g1 selected for the reference image, as shown in FIG. 12B.
  • Specifically, axial images (two-dimensional tomographic images obtained by cutting the voxels in a plane perpendicular to the body axis) are created at fixed intervals from this three-dimensional data.
  • In each axial image, the voxels belonging to each vascular site are designated by manual operation, and the name of the vascular site is further designated.
  • In addition, landmark voxels are set in the reference image g2 at characteristic points such as inflection points, end points, and intersections of the blood vessel parts.
  • the landmark is used for alignment between the target image and the reference image, and will be described in detail later.
  • Landmarks are also set according to manual operations based on doctors' indications.
  • the reference image g2 may be created by the control unit 11 of the medical image processing apparatus 10, or an externally created image may be stored in the storage unit 15.
  • For convenience of explanation, each blood vessel part is identified and displayed in FIG. 12A, but the actual reference image g2 is a binarized image with a black background (low signal value) and a white blood vessel image (high signal value).
  • The position information and name information of the voxels belonging to each blood vessel site, together with the position information of the landmark voxels, are attached to the reference image or stored in the storage unit 15 as a separate file associated with the reference image.
  • In the blood vessel part discrimination process, a normalization process is first performed by the control unit 11 on the three-dimensional MRA image to be discriminated (hereinafter referred to as the target image) (step S11).
  • Depending on the imaging conditions, the voxels may be rectangular parallelepipeds rather than cubes, and the maximum and minimum voxel values may differ between images. Therefore, normalization processing is performed to unify these preconditions for the target image.
  • Specifically, the target image is converted by the linear interpolation method so that all the sides constituting a voxel have the same size.
  • In addition, a histogram is created from all the voxel values of the target image, and the voxel values are linearly converted to 1024 gradations so that the value at the top 5% point of the histogram becomes 1024 and the minimum value becomes 0.
  • the density gradation range is not limited to 0 to 1024 and can be set as appropriate.
  • FIG. 13A and FIG. 13B show an example of normalization processing.
  • The target image g3 shown in FIG. 13A and the target image g4 shown in FIG. 13B were obtained from different patients. For this reason, although the histogram h1 (see FIG. 13A) obtained from the target image g3 and the histogram h3 (see FIG. 13B) obtained from the target image g4 share the common feature of having two local maxima, the ranges of their values differ considerably and their overall histogram characteristics are different. If the histograms are created again after the target images g3 and g4 are subjected to the above normalization processing, the histogram h2 shown in FIG. 13A and the histogram h4 shown in FIG. 13B are obtained. As the histograms h2 and h4 show, the histogram characteristics of the target images g3 and g4 become almost the same through the normalization process.
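The normalization described above (a linear gray-level conversion mapping the top-5% histogram point to 1024 and the minimum to 0) can be sketched as follows. The function name and the use of `np.quantile` to locate the top-5% point are illustrative assumptions, not taken from the patent; the isotropic voxel resampling step is omitted here.

```python
import numpy as np

def normalize_voxel_values(volume, n_levels=1024, top_fraction=0.05):
    """Linearly rescale voxel values so that the top-5% point of the
    histogram maps to n_levels and the minimum maps to 0, clipping
    anything above (one reading of the normalization in the text)."""
    v = volume.astype(np.float64)
    lo = v.min()
    # value below which 95% of the voxels fall (the top-5% boundary)
    hi = np.quantile(v, 1.0 - top_fraction)
    if hi <= lo:
        return np.zeros_like(v)
    out = (v - lo) / (hi - lo) * n_levels
    return np.clip(out, 0, n_levels)
```

After this conversion, histograms of different patients' volumes occupy the same 0 to 1024 range, which is the effect illustrated by histograms h2 and h4.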
  • the control unit 11 extracts a blood vessel image from the normalized target image (step S12).
  • threshold processing is performed on the target image, and binarization is performed.
  • In the binarized image, the blood vessel image appears white and the other tissue parts appear black, so the blood vessel image has a different value from the other regions. The region having the same signal value as the blood vessel image is then extracted by the region expansion method.
  • Using the binarized image, the starting voxel (the whitest, highest-density voxel) is determined, and the corresponding position in the target image before binarization is taken as the starting point.
  • The 26 voxels neighboring the starting voxel are examined, and each neighboring voxel that satisfies a certain determination condition (for example, a density value of 500 or more) is determined to be part of the blood vessel image; the examination is then repeated from the newly added voxels.
  • FIG. 14B shows a blood vessel extraction image g6 obtained by extracting blood vessel images.
  • The blood vessel extraction image g6 is obtained by extracting the blood vessel image from the normalized target image g5 shown in FIG. 14A, and is a binarized image in which the blood vessel region is white (density value 1024) and the other regions are black (density value 0).
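The region expansion step can be sketched roughly as follows. The 26-neighborhood and the example threshold of 500 follow the text; the queue-based traversal and function name are implementation choices.

```python
import numpy as np
from collections import deque

def region_grow_26(volume, seed, threshold=500):
    """Grow a region from `seed`, repeatedly adding any 26-connected
    neighbour whose density value meets the condition (>= threshold)."""
    mask = np.zeros(volume.shape, dtype=bool)
    if volume[seed] < threshold:
        return mask
    mask[seed] = True
    queue = deque([seed])
    offsets = [(dx, dy, dz)
               for dx in (-1, 0, 1) for dy in (-1, 0, 1) for dz in (-1, 0, 1)
               if (dx, dy, dz) != (0, 0, 0)]          # the 26 neighbours
    while queue:
        x, y, z = queue.popleft()
        for dx, dy, dz in offsets:
            n = (x + dx, y + dy, z + dz)
            if all(0 <= n[i] < volume.shape[i] for i in range(3)) \
               and not mask[n] and volume[n] >= threshold:
                mask[n] = True
                queue.append(n)
    return mask
```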
  • Next, in order to make the position of the blood vessel image in the blood vessel extraction image substantially coincide with that in the reference image, the control unit 11 performs alignment based on the position of the center of gravity of each image (step S13).
  • The position of the center of gravity is the position of the voxel at the centroid of all the voxels belonging to the blood vessel image.
  • A specific description will be given with reference to FIGS. 15A and 15B.
  • FIG. 15A is a diagram in which the blood vessel extraction image and the reference image before alignment are superimposed. From FIG. 15A, it can be seen that the positions of the respective blood vessel images do not coincide when the blood vessel extraction image and the reference image are simply superimposed.
  • The control unit 11 obtains the positions of the centroid P (x1, y1, z1) of the blood vessel extraction image and the centroid Q (x2, y2, z2) of the reference image, as shown in FIG. 15A.
  • Then, the blood vessel extraction image or the reference image is translated so that the barycentric positions P and Q match.
  • FIG. 15B is a diagram showing the result obtained by matching the centroid positions P and Q by translation.
  • From FIG. 15B, it can be seen that the blood vessel image in the blood vessel extraction image and that in the reference image roughly coincide. [0072] Further, in order to perform the alignment with high accuracy, the control unit 11 performs rigid body deformation on the blood vessel extraction image (step S14).
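The center-of-gravity alignment of step S13 might look like this in outline. Rounding the shift to whole voxels is an assumption; the patent does not say how sub-voxel shifts are handled.

```python
import numpy as np

def align_by_centroid(moving_mask, fixed_mask):
    """Translate `moving_mask` so that the centroid of its blood-vessel
    voxels coincides (to the nearest voxel) with that of `fixed_mask`."""
    p = np.array(np.nonzero(moving_mask)).mean(axis=1)   # centroid P
    q = np.array(np.nonzero(fixed_mask)).mean(axis=1)    # centroid Q
    shift = np.round(q - p).astype(int)
    out = np.zeros_like(moving_mask)
    idx = np.array(np.nonzero(moving_mask)).T + shift
    # keep only voxels that remain inside the volume after shifting
    ok = np.all((idx >= 0) & (idx < np.array(moving_mask.shape)), axis=1)
    out[tuple(idx[ok].T)] = True
    return out, shift
```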
  • First, a corresponding point search using the cross-correlation coefficient is performed as preprocessing for the rigid body deformation. In rigid body deformation, a plurality of corresponding points is set in each of the two images to be aligned, and one of the images is rigidly deformed so that the corresponding points set in the two images match.
  • Here, voxels of the blood vessel extraction image whose local image characteristics are similar to those of the landmarks predetermined in the reference image are set as corresponding points. The similarity of image characteristics is determined based on the cross-correlation coefficient calculated between the blood vessel extraction image and the reference image.
  • corresponding points corresponding to 12 landmarks set in advance in the blood vessel image of the reference image g7 are searched from the blood vessel extraction image.
  • Taking as the starting point the voxel of the blood vessel extraction image g8 at the position corresponding to each landmark of the reference image g7, the voxels in the range of -10 to +10 in the X-, Y-, and Z-axis directions (a cubic region of 21 × 21 × 21) are searched, and the cross-correlation coefficient C (hereinafter referred to as the correlation value C) between each searched voxel and the landmark is calculated.
  • In Equation 2, A(i, j, k) represents the voxel value at position (i, j, k) of the reference image g7, and B(i, j, k) that of the blood vessel extraction image g8.
  • Ā and B̄ are the average values of the voxel values in the search regions of the reference image g7 and the blood vessel extraction image g8, respectively, and are represented by the following Equations 3 and 4.
  • σA and σB are the standard deviations of the voxel values in the reference image g7 and the blood vessel extraction image g8, respectively.
  • The correlation value C has a value range of -1.0 to 1.0, and the closer it is to the maximum value 1.0, the more similar the image characteristics of the reference image g7 and the blood vessel extraction image g8 are.
  • The position of the voxel having the largest correlation value C is set as the corresponding point of the blood vessel extraction image g8 for the landmark of the reference image g7.
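Equations 2 to 4 themselves did not survive extraction. A standard normalized cross-correlation, consistent with the surrounding description (value range -1.0 to 1.0, means Ā and B̄, standard deviations σ_A and σ_B over the N voxels of the compared blocks), would be:

```latex
% Plausible reconstruction of Eqs. 2--4, not the patent's original typography
C = \frac{1}{N\,\sigma_A\,\sigma_B}
    \sum_{i,j,k}\bigl(A(i,j,k)-\bar{A}\bigr)\bigl(B(i,j,k)-\bar{B}\bigr)
    \qquad\text{(Eq.\ 2)}
\qquad
\bar{A} = \frac{1}{N}\sum_{i,j,k} A(i,j,k),
\qquad
\bar{B} = \frac{1}{N}\sum_{i,j,k} B(i,j,k)
\qquad\text{(Eqs.\ 3, 4)}
```

This form gives C = 1.0 for identical blocks and -1.0 for perfectly anti-correlated ones, matching the stated range.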
  • The control unit 11 then performs rigid body deformation on the blood vessel extraction image g8 based on these corresponding points, whereby the blood vessel image of the blood vessel extraction image g8 is aligned with that of the reference image g7.
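The corresponding-point search described above can be sketched with brute force. The 21 × 21 × 21 search region and the correlation criterion follow the text, while the compared-block half-width `patch` and the function names are assumptions.

```python
import numpy as np

def correlation_value(a, b):
    """Normalized cross-correlation C between two equally sized
    voxel blocks; C ranges from -1.0 to 1.0."""
    a = a.astype(np.float64).ravel()
    b = b.astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def search_corresponding_point(ref, ext, landmark, radius=10, patch=2):
    """Search the (2*radius+1)^3 region of the extraction image `ext`
    around `landmark` for the voxel whose local patch correlates best
    with the reference patch around the landmark in `ref`."""
    lz, ly, lx = landmark

    def block(vol, c):
        z, y, x = c
        return vol[z - patch:z + patch + 1,
                   y - patch:y + patch + 1,
                   x - patch:x + patch + 1]

    ref_block = block(ref, landmark)
    best, best_c = landmark, -2.0
    for dz in range(-radius, radius + 1):
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                c = (lz + dz, ly + dy, lx + dx)
                cand = block(ext, c)
                if cand.shape != ref_block.shape:
                    continue          # candidate block leaves the volume
                cc = correlation_value(ref_block, cand)
                if cc > best_c:
                    best, best_c = c, cc
    return best, best_c
```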
  • Rigid body deformation is one of the affine transformations, in which coordinate transformation is performed by rotation and translation.
  • Here, the alignment is performed by an ICP (Iterative Closest Point) algorithm, which repeats rigid body deformation using the least squares method so that the corresponding points of the blood vessel extraction image g8 match the landmarks of the reference image g7.
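The least-squares rigid fit that an ICP iteration repeats can be sketched with the SVD-based (Kabsch) solution. The patent does not specify this particular algorithm, so treat it as one plausible realization of "rigid body deformation using the least squares method".

```python
import numpy as np

def rigid_fit(src, dst):
    """One least-squares rigid (rotation + translation) fit mapping the
    corresponding points `src` onto `dst` -- the inner step an ICP loop
    would repeat after re-pairing points."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    h = (src - cs).T @ (dst - cd)           # cross-covariance matrix
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))  # avoid reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = cd - r @ cs
    return r, t                             # dst ≈ (r @ src.T).T + t
```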
  • In the blood vessel part discrimination (step S15), for a given voxel of interest in the blood vessel extraction image, the squared Euclidean distance to every voxel belonging to each blood vessel part of the reference image is obtained. The blood vessel part to which the reference voxel with the shortest Euclidean distance belongs is determined to be the blood vessel part of the voxel of interest, and the name of the blood vessel part is determined from the name set for that reference voxel.
  • Blood vessel part information indicating the positions of the voxels belonging to each blood vessel part and the names of the blood vessel parts is generated by the control unit 11 and attached to the target image (step S16). For example, if the voxel at position (x3, y3, z3) is determined to be part of the anterior cerebral artery, blood vessel part information indicating that the voxel at position (x3, y3, z3) belongs to the blood vessel part named "anterior cerebral artery" is appended to the header area of the target image.
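A minimal sketch of the nearest-reference-voxel labeling (brute-force squared Euclidean distances; the vessel names used in the test are only examples):

```python
import numpy as np

def label_vessel_parts(target_voxels, ref_voxels, ref_labels):
    """Assign each target voxel the vessel-part name of its nearest
    reference voxel (smallest squared Euclidean distance)."""
    t = np.asarray(target_voxels, float)   # (N, 3) voxel coordinates
    r = np.asarray(ref_voxels, float)      # (M, 3) reference coordinates
    # squared distances, shape (N, M)
    d2 = ((t[:, None, :] - r[None, :, :]) ** 2).sum(axis=2)
    nearest = d2.argmin(axis=1)
    return [ref_labels[i] for i in nearest]
```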
  • FIG. 17A and FIG. 17B show the results of determining the blood vessel site.
  • The blood vessel extraction image g9 shown in FIG. 17A and the blood vessel extraction image g11 shown in FIG. 17B are images obtained from different subjects.
  • The image g10 shown in FIG. 17A and the image g12 shown in FIG. 17B are images in which each blood vessel part discriminated from the blood vessel extraction images g9 and g11, respectively, is identified and displayed by changing the color for each blood vessel part.
  • From the blood vessel images of the images g10 and g12, it can be seen that the same blood vessel sites are identified despite their different forms (blood vessel position, size, extension direction, etc.).
  • The above is the flow from determining the blood vessel parts in the target image to attaching the blood vessel part information.
  • When a display instruction operation is performed on such a target image via the operation unit 12, the control unit 11 performs MIP processing on the target image to generate a MIP image and displays it on the display unit 13.
  • Hereinafter, the display of MIP images is referred to as MIP display.
  • MIP is a method of projecting parallel rays from a certain direction and creating a two-dimensional image by reflecting, on the projection plane, the maximum luminance (voxel value) among the voxels on each projection line.
  • This projection direction is the line-of-sight direction that the doctor desires to observe.
  • That is, the doctor can freely specify the observation direction; the control unit 11 creates a MIP image from the target image according to the observation direction instructed through the operation unit 12 and displays it on the display unit 13.
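Along a coordinate axis, MIP reduces to a per-ray maximum; arbitrary observation directions would additionally require resampling (e.g. rotating) the volume first, which is omitted in this sketch.

```python
import numpy as np

def mip(volume, axis=0):
    """Maximum intensity projection: keep the largest voxel value
    along each parallel projection ray (here, along `axis`)."""
    return volume.max(axis=axis)
```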
  • In the state where the target image is MIP-displayed, when an instruction operation for identifying and displaying the vascular parts is further performed, the control unit 11 refers to the vascular part information attached to the target image and, based on it, performs display control so that each blood vessel part can be identified in the blood vessel image of the MIP image (step S17).
  • For example, the control unit 11 sets blue for the voxels belonging to the blood vessel part of the anterior cerebral artery in the MIP image and green for the voxels belonging to the blood vessel part of the basilar artery; in this way, a color is set for the voxels at the positions determined for each blood vessel part, and the set colors are reflected in the MIP image of the target image.
  • an annotation image indicating the name of the blood vessel part is created and synthesized with the corresponding blood vessel part of the MIP image.
  • FIG. 18 shows an example of identification display.
  • The MIP image g13 is the target image MIP-displayed from above the head.
  • When identification display is instructed, the identification display image g14 is displayed.
  • The identification display image g14 is obtained by identifying each blood vessel part, for example by assigning a different color to each of the eight kinds of blood vessel parts in the blood vessel extraction image.
  • In the identification display image g14, when the doctor selects a blood vessel part, an annotation indicating the name of that part, such as "basilar artery", is displayed in association with the selected part under the display control of the control unit 11.
  • When observation from the side of the head is instructed, the MIP image g15 corresponding to the lateral direction is created by the control unit 11 and displayed.
  • In this lateral view, the blood vessel image on the front side overlaps the blood vessel image on the rear side, making observation difficult.
  • Even in such a MIP image g15, it is possible to identify and display the blood vessel sites.
  • With the blood vessel part information, it can be determined which voxel corresponds to which blood vessel part, so the voxels to be identified and displayed can be specified regardless of changes in the observation direction of the MIP display.
  • The identification display image corresponding to the MIP image g15 is the image g16.
  • From this identification display image g16, it is possible to extract and display only one of the blood vessel sites.
  • In this case, a MIP image in which only the luminance of the selected blood vessel part is projected, that is, a blood vessel selection image g17 in which only the blood vessel site selected from the MIP image g15 of the target image is extracted, is displayed.
  • In the blood vessel selection image g17, only the selected blood vessel part is MIP-displayed and the other blood vessel parts are not displayed, so the doctor can observe only the blood vessel part of interest.
  • These display images g13 to g17 can be displayed side by side on the same screen, or the display can be switched to show one image per screen.
  • By displaying them side by side, the MIP display image g13, the identification display image g14, and the blood vessel selection image g17 can be compared and observed.
  • By switching the display, each of the images g13 to g17 can be observed in full screen, which makes it easier to observe details.
  • Three-dimensional MRA image data for 20 patients were obtained. These image data have a matrix size of 256 × 256, a spatial resolution of 0.625 to 0.78 mm, and a slice thickness of 0.5 to 1.2 mm, and include seven unruptured cerebral aneurysms. Each unruptured cerebral aneurysm was identified by an experienced neurosurgeon.
  • These three-dimensional MRA image data were converted by the linear interpolation method into image data with equal voxel sizes, and normalization was performed. As a result, all image data became equal-voxel image data with a matrix size of 400 × 400 × 200 and a spatial resolution of 0.5 mm.
  • For each cerebral aneurysm region (true positive) identified by the doctor, the feature amounts of size (volume), sphericity, and vector concentration degree were calculated, and the same feature amounts were calculated for normal blood vessel regions (false positives).
  • These feature quantities were used to determine the detection range of the rule-based method, the discriminator for the secondary detection, and as teacher data for the discriminant analysis used in the tertiary detection.
  • As described above, a cerebral aneurysm candidate, which has the characteristic that gradient vectors concentrate at its central portion, is accurately detected using the vector concentration filter, and the detected information can be provided to the doctor. Therefore, fatigue and oversight during the doctor's interpretation work can be prevented, and improved diagnostic accuracy is expected.
  • the vector concentration filter is applied not to the MRA image itself but to the extracted image obtained by extracting the blood vessel region, it is possible to shorten the processing time required for the filter processing.
  • In the above description, a cerebral aneurysm candidate is detected using a 3D MRA image, but detection may also be performed using a two-dimensional MRA image.
  • In that case, the size of the cerebral aneurysm candidate is the number of pixels, the sphericity is replaced by circularity, and two-dimensional feature values are calculated.
  • Instead of the MRA image, an MRI image obtained by another imaging method, such as a contrast MRA image obtained by imaging the blood vessel region using a contrast agent, may be used.
  • Alternatively, an image in which the blood vessel region is imaged by another imaging apparatus, such as CTA (Computed Tomography Angiography) or DSA (Digital Subtraction Angiography), may be used.
  • The detection target is not limited to cerebral aneurysms; the present invention can be applied to any lesion having a spherical shape.
  • As described above, each blood vessel part included in the blood vessel image of the target image is discriminated, and its position and name are attached to the target image as blood vessel part information. Therefore, when the target image is MIP-displayed, each blood vessel part can easily be identified and displayed based on the blood vessel part information, the doctor can observe the target image while paying attention to a specific blood vessel site, and interpretation efficiency can be improved.
  • Since each blood vessel part can be identified and displayed, the doctor can easily grasp the position and name of each blood vessel part, and the efficiency of the interpretation work can be improved.
  • In addition, in response to a selection operation on any of the identified blood vessel parts, a target MIP image in which only the selected blood vessel part is extracted is generated and displayed.
  • As a result, the doctor can observe only the blood vessel site of interest. Overlapping of multiple blood vessel sites can thus be eliminated, and a specific blood vessel site, such as a site where aneurysms frequently occur, can be singled out for observation.
  • The morphology of the major blood vessel sites (vessel length, extension direction, thickness, etc.) is generally the same across different subjects (patients), but the shape of the minor blood vessels varies from individual to individual, so the overall shape of the vasculature differs between subjects.
  • Even so, the blood vessel parts can be discriminated uniformly regardless of the subject, so the method is highly versatile.
  • Since the alignment is performed in two stages (based on the center-of-gravity position and then on rigid body deformation), the accuracy of the blood vessel part determination can be improved.
  • Because a rough alignment has already been achieved by the center-of-gravity matching, the processing time for the rigid body deformation can be shortened, and the processing efficiency is good.
  • an MRI image obtained by another imaging method such as a contrast MRA image obtained by imaging a blood vessel using a contrast agent may be used.
  • Alternatively, an image obtained by imaging a blood vessel with another imaging apparatus, such as CTA (Computed Tomography Angiography) or DSA (Digital Subtraction Angiography), may be used.
  • the medical image processing apparatus according to the second embodiment has the same configuration as the medical image processing apparatus 10 according to the first embodiment, and only the operation is different. Therefore, the same components as those in the medical image processing apparatus 10 (see FIG. 1) according to the first embodiment are denoted by the same reference numerals, and the operation of the medical image processing apparatus 10 in the second embodiment will be described below.
  • FIG. 19 is a flowchart showing the detection process according to the second embodiment.
  • In the detection process, the MRA 3D image data is first input (step S101), and the 3D image data is preprocessed (step S102).
  • Next, the image region of the blood vessel is extracted from the 3D image data (step S103). Since steps S101 to S103 are the same processing as steps S1 to S3 described with reference to FIG. 2 in the first embodiment, detailed description is omitted here.
  • the primary candidate region of the cerebral aneurysm is detected by the GC filter bank using the extracted three-dimensional MRA image of the blood vessel region (step S104).
  • the GC filter bank is a combination of various filter processes and is divided into an analysis bank and a reconstruction bank.
  • In the analysis bank, multi-resolution analysis is performed on the original image (the 3D MRA image of the blood vessel region) to create images at different resolution levels (hereinafter referred to as partial images), and weight images are created from these partial images.
  • In the reconstruction bank, each partial image is weighted with a weight image, and the original image is then reconstructed from the weighted partial images.
  • FIG. 20 shows the analysis bank.
  • In the analysis bank, the 3D MRA image of the blood vessel region is used as the original image S, filter processing is performed in the filter bank A(z^j), and partial images at each resolution level j are created sequentially.
  • The filter bank A(z^j) decomposes the image S into the partial images S, Wz, Wy, and Wx through filter processing with the filters H(z^j) and G(z^j).
  • To obtain S, the smoothing filter H(z^j) is applied in each of the x, y, and z directions.
  • The smoothing filter H(z^j) is expressed by the following Equation 7.
  • In Equation 7, z denotes the z-transform (the same applies hereinafter to Equations 8 to 10, which express filters).
  • The partial images Wz, Wy, and Wx at each resolution level are filtered by the vector concentration filter GC, and the vector concentration degree at each resolution level is calculated. Since the method for calculating the vector concentration degree has been described above, its description is omitted here.
  • The calculated vector concentration degree is input to the neural network NN.
  • The neural network NN is designed to output a value in the range of 0 to 1: the higher the possibility of a cerebral aneurysm, the closer the output is to 1, and the lower the possibility, the closer it is to 0.
  • When the output values are obtained from the neural network NN, the determination unit NM generates and outputs a weight image V based on them.
  • In the weight image V, the voxel value is set to 1 for a voxel whose output value is larger than a certain threshold (here, 0.8), and to 0 for a voxel whose output value is less than or equal to the threshold 0.8.
  • In other words, a voxel for which an output value exceeding the threshold 0.8 is obtained is regarded as a voxel constituting a cerebral aneurysm region.
  • The weight image is thus created by binarizing the voxel values with the threshold as the boundary.
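The weight image generation is a simple binarization of the network output volume; the strict `>` comparison mirrors the "larger than the threshold" wording, and the function name is illustrative.

```python
import numpy as np

def make_weight_image(nn_output, threshold=0.8):
    """Binarize the neural-network output volume: voxels whose output
    value exceeds the threshold (0.8 in the text) get weight 1, the
    rest weight 0.  The weighting step then multiplies each partial
    image by this mask, e.g. `weighted = partial * weight_image`."""
    return (np.asarray(nn_output) > threshold).astype(np.uint8)
```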
  • the weight image V is input to the reconstruction bank.
  • FIG. 22 is a diagram showing a reconfiguration bank.
  • In the reconstruction bank, each partial image S, Wz, Wy, Wx is weighted, and the original image S is reconstructed through the filter bank S(z^j).
  • Specifically, each partial image S, Wz, Wy, Wx is multiplied by the weight image V.
  • The value of 1 or 0 set for each voxel of the weight image V is used as the weighting coefficient in this weighting process.
  • Each partial image multiplied by the weight image V is input to the filter bank S(z^j).
  • As shown in FIG. 23, the filter bank S(z^j) performs filter processing with the filters L(z^j), K(z^j), and H(z^j)L(z^j), and reconstructs S from the partial images S, Wz, Wy, and Wx.
  • S is filtered by the filter L(z^j) in each of the x, y, and z directions.
  • For Wx, the filter K(z^j) is applied in the x direction and the filter H(z^j)L(z^j) in the y and z directions; for Wy, K(z^j) in the y direction and H(z^j)L(z^j) in the x and z directions; and for Wz, K(z^j) in the z direction and H(z^j)L(z^j) in the x and y directions.
  • The reconstructed S is further filtered by the filter bank S(z^{j-1}).
  • The original image S is reconstructed by repeating such filter processing.
  • The filters H(z^j) and G(z^j) in the filter banks A(z^j) and S(z^j) are designed so that the original image S can be reconstructed; since the partial images have been weighted by the weight image V, the output image S contains only the image regions that are likely to be a cerebral aneurysm.
  • Next, feature amounts are calculated using the output image S (step S105).
  • The feature amounts calculated are the size of the candidate region, its sphericity, and the average of the vector concentration degree over the voxels in the region.
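These feature amounts might be computed as follows. The sphericity definition used here (region volume divided by the volume of the smallest centroid-centered enclosing sphere) is an assumption, since the patent does not give its formula; the size and mean vector concentration follow the text.

```python
import numpy as np

def candidate_features(mask, concentration):
    """Feature amounts for one candidate region: size (voxel count,
    i.e. volume), a sphericity proxy, and the mean vector concentration
    degree over the region's voxels."""
    pts = np.array(np.nonzero(mask)).T.astype(float)
    size = len(pts)                        # volume in voxels
    centroid = pts.mean(axis=0)
    radii = np.sqrt(((pts - centroid) ** 2).sum(axis=1))
    r = radii.max() + 0.5                  # half-voxel margin
    sphere_vol = 4.0 / 3.0 * np.pi * r ** 3
    sphericity = size / sphere_vol         # assumed definition, near 1 for a ball
    mean_conc = float(np.asarray(concentration)[mask].mean())
    return size, sphericity, mean_conc
```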
  • Secondary detection is performed using the calculated feature quantities (step S106), followed by tertiary detection (step S107), and the final detection result is output (step S108). Since the processing of steps S105 to S108 is the same as steps S5 to S8 described with reference to FIG. 2, detailed description is omitted.
  • the blood vessel part discrimination process is also performed in the second embodiment, but since the process contents are the same as those in the first embodiment, description of the process contents and effects is omitted.
  • As described above, in the second embodiment, the vector concentration degree is calculated at each resolution level from the partial images obtained by multi-resolution analysis of the head image in the GC filter bank.
  • A weight image is generated from these values and multiplied with each partial image for weighting, and the original image is reconstructed from the weighted partial images. Therefore, only regions with a high vector concentration degree, that is, regions highly likely to be an aneurysm, are reconstructed, and a reconstructed image showing only the aneurysm candidate regions can be obtained.
  • the present invention can be used in the field of image processing, and can be applied to a medical image processing apparatus that performs image analysis and image processing of a head image obtained by a medical imaging apparatus.


Abstract

A lesion is accurately detected from a head image, and a specific blood vessel portion can be intensively observed. A medical image processor (10) extracts the cerebral blood vessel region from an MRA image created by imaging the head, and a GC filter bank carries out primary detection using the image from which the cerebral blood vessel region has been extracted. The analysis bank of the GC filter bank performs multiresolution analysis and calculates the vector concentration index from the partial image at each resolution level. A weighted image is created in which voxels having a vector concentration index above a threshold are given a weight of 1 and those with an index equal to or below the threshold are given a weight of 0. The reconstruction bank carries out weighting by multiplying each partial image by the weighted image and reconstructs the original image from the weighted partial images. The averages of the size (volume), sphericity, and vector concentration index of the blood vessel region reproduced in the reconstructed image are calculated, and secondary detection is conducted by the rule-based method. Tertiary detection of the secondary-detection candidates is conducted by discriminant analysis. The detection result is output and displayed on a display section (13).

Description

明 細 書  Specification
医用画像処理装置及び画像処理方法  Medical image processing apparatus and image processing method
技術分野  Technical field
[0001] 本発明は、患者の頭部を撮影して得られた頭部画像の画像解析、画像処理を行う 医用画像処理装置及び画像処理方法に関する。  The present invention relates to a medical image processing apparatus and an image processing method for performing image analysis and image processing of a head image obtained by imaging a patient's head.
背景技術  Background art
[0002] 近年、磁気共鳴撮景装置(以下、 MRI ; Magnetic Resonance Imagingと!、う)の普及 と高性能化に伴い、脳ドック検査の件数が急速に増加してきている。脳ドック検査の 目的の一つは、血管において生じる未破裂脳動脈瘤を早期に発見し、適切な処置 や治療を行うことによって、脳動脈瘤の破裂によるくも膜下出血等、重篤な疾患の発 病を防止することにある。  In recent years, with the spread and high performance of magnetic resonance imaging devices (hereinafter referred to as MRI; Magnetic Resonance Imaging!), The number of brain dock examinations has increased rapidly. One of the purposes of the brain dock test is to detect unruptured cerebral aneurysms that occur in blood vessels at an early stage and take appropriate measures and treatments to prevent serious diseases such as subarachnoid hemorrhage due to ruptured cerebral aneurysms. It is to prevent disease.
[0003] 医師による未破裂脳動脈瘤の検出は、 MRIで血管内の血液の流れを画像ィ匕した MRA画像 (Magnetic Resonance Angiography)を用いて行われる。通常、医師による 読影時には 3次元画像データを様々な角度から MIP処理(Maximum Intensity Proje ction ;最大値輝度投影法)によって 2次元化した画像が用いられる力 血管に生じる 未破裂脳動脈瘤は小さいため、重なって表示される周囲の血管像との識別が必要と なり、医師の疲労が激しい。また、この疲労による見落としの可能性も考えられる。  [0003] Detection of an unruptured cerebral aneurysm by a doctor is performed using an MRA image (Magnetic Resonance Angiography) in which the blood flow in the blood vessel is imaged by MRI. Usually, when a doctor interprets a 3D image data from various angles, a 2D image is used by MIP processing (Maximum Intensity Projection). Because the unruptured cerebral aneurysm that occurs in the blood vessel is small Therefore, it is necessary to distinguish from the surrounding blood vessel images that are displayed in an overlapping manner, and the doctor's fatigue is severe. There is also the possibility of oversight due to this fatigue.
[0004] To support such diagnosis by physicians, apparatuses that detect the image region of a lesion by image processing have conventionally been developed (see, for example, Patent Document 1 and Non-Patent Documents 1 and 2). Such an apparatus is generally called CAD (Computer-Aided Diagnosis).
[0005] Meanwhile, when a MIP image is generated and displayed on a medical image processing apparatus during interpretation, multiple blood vessels may be displayed in overlap depending on the viewing direction. In this case, detecting a small unruptured aneurysm requires distinguishing the vessel image the physician wishes to focus on from the surrounding vessel images, which demands long interpretation times and causes severe physician fatigue. Fatigue may lead to overlooking an unruptured aneurysm that should have been detected.
[0006] Vessel sites where aneurysms frequently occur are known to include the middle cerebral artery bifurcation, the anterior communicating artery, and the bifurcation of the internal carotid artery and posterior communicating artery. To observe these frequent sites in detail, it suffices to generate a target MIP image that focuses only on the vessel image of the vessel site of interest to the physician.
[0007] In addition, for regions where multiple vessel sites intersect, a technique has been disclosed that uses depth information along the viewing direction to display the intersecting portion of the vessel site with the larger depth, i.e., the vessel site on the far side in the viewing direction, at reduced luminance (see, for example, Patent Document 2). This method gives the vessel image a sense of depth and makes the near-side vessel site easier to observe.
Patent Document 1: Japanese Unexamined Patent Application Publication No. 2002-112986
Patent Document 2: Japanese Unexamined Patent Application Publication No. H05-277091
Non-Patent Document 1: Norio Hayashi et al., "Automatic extraction of the cerebellum and cerebral lesion areas in head MRI images using morphological processing", Medical Imaging and Information Sciences, vol. 21, no. 1, pp. 109-115, 2004
Non-Patent Document 2: Ryujiro Yokoyama et al., "Automatic detection of lacunar infarct regions in brain MR images", Journal of the Japanese Society of Radiological Technology, 58(3), 399-405, 2002
Disclosure of the invention
Problems to be solved by the invention
[0008] However, no method has yet been proposed that distinguishes unruptured cerebral aneurysms from vascular tissue and detects them with high accuracy.
[0009] Furthermore, because conventional medical image processing apparatuses cannot automatically discriminate the individual vessel sites within a vessel image, automatically creating the above target MIP image requires manually designating, in the three-dimensional image data, only the vessel image of the vessel site of interest. Such an operation is not easy and is time-consuming.
[0010] Moreover, the method described in Patent Document 2 makes it easier to observe the vessel site closer to the observer. Consequently, in a MIP image created for the viewing direction the physician wishes to use, if another vessel site crosses in front of the vessel site of interest, that site cannot be removed, so the desired vessel site cannot always be observed in detail from the desired observation direction.
[0011] An object of the present invention is to detect lesions from head images with high accuracy. A further object is to enable observation focused on a specific blood vessel site.
Means for solving the problems
[0012] The invention described in claim 1 is a medical image processing apparatus comprising:
analysis means for performing multi-resolution analysis on a head image and calculating, for each resolution level, a degree of vector concentration using the partial images decomposed at that resolution level;
reconstruction means for, in reconstructing the original head image from the partial images, reproducing the original image only in candidate lesion regions where the calculated degree of vector concentration attains a predetermined value; and
deletion means for deleting, from the candidate lesion regions reproduced in the head image, using the reconstructed head image, false-positive candidate regions that are normal blood vessels.
[0013] The invention described in claim 8 is the medical image processing apparatus according to claim 1, further comprising:
extraction means for extracting a blood vessel image from the head image; and
image control means for discriminating one or more blood vessel sites included in the extracted blood vessel image and attaching blood vessel site information relating to the discriminated blood vessel sites to the head image.
Effects of the invention
[0014] According to the inventions described in claims 1 to 4 and claims 14 to 17, the degree of vector concentration is calculated for each resolution level from the partial images obtained by multi-resolution analysis of the head image. Calculating the degree of vector concentration at each resolution level makes it possible to handle detection of aneurysms of various sizes, enabling more accurate detection processing. A lesion such as an aneurysm exhibits a certain high degree of vector concentration; according to the present invention, reconstruction is performed so that only regions highly likely to be such lesions are reproduced, yielding a reconstructed image in which only the candidate lesion regions are imaged. By computing feature quantities on this reconstructed image, image elements outside the candidate regions can be excluded from the computation, so the feature quantities of the candidate regions can be calculated accurately. The accuracy of the detection processing itself can therefore be further improved. [0015] According to the inventions described in claims 5 to 7 and claims 18 to 20, the region over which the degree of vector concentration is calculated can be substantially narrowed to the blood vessel regions where aneurysm lesions exist, making it possible to shorten the computation time.
[0016] According to the inventions described in claims 8 and 21, by referring to the blood vessel site information attached to the target image, the one or more vessel sites included in the vessel image of the target image can easily be discriminated. Once the vessel sites can be discriminated, each vessel site contained in the target image can be identified whenever the target image is used, and that information can be provided to the physician, for example by displaying each vessel site distinctively when the target image is displayed. The physician can thus observe the target image while focusing on a specific vessel site, improving interpretation efficiency.
[0017] According to the inventions described in claims 9 and 22, referring to the blood vessel site information attached to the target image makes it easy to discriminate the position and name of each vessel site included in the vessel image of the target image. By discriminating the position and name of each vessel site, the position and name of each vessel site contained in the target image can be identified when the target image is used, and that information can be provided to the physician. The physician can therefore easily grasp the position and name of a specific vessel site in the target image.
[0018] According to the inventions described in claims 10 and 23, although the form of the vessel image in the target image varies between subjects (patients), aligning the images by affine transformation so that the vessel images approximately coincide makes it possible to associate the vessel images of the reference image and the target image with high accuracy. Vessel sites can thus be discriminated regardless of individual differences between subjects, giving the method high versatility.
[0019] According to the inventions described in claims 11 and 24, the physician can easily identify each of the one or more vessel sites included in the vessel image of the target image.
[0020] According to the inventions described in claims 12 and 25, the physician can easily grasp the name of each of the one or more vessel sites included in the vessel image of the target image.
[0021] According to the inventions described in claims 13 and 26, it is possible to extract from the target image and observe only the vessel image of the specific vessel site the physician wishes to observe. Because a vessel image may display multiple vessel sites in overlap, the overlapping portions can be hard to observe. Displaying only the specific vessel site selected by the physician eliminates this overlap and provides an environment in which the physician can interpret images easily.
Brief description of the drawings
[FIG. 1] A diagram showing the internal configuration of the medical image processing apparatus according to the embodiment.
[FIG. 2] A flowchart explaining the flow of the detection processing executed by the medical image processing apparatus.
[FIG. 3A] A diagram showing an example MRA image.
[FIG. 3B] A diagram showing an example extracted image in which the blood vessel regions have been extracted.
[FIG. 4] A diagram showing the vector concentration filter.
[FIG. 5] A diagram showing a cerebral aneurysm model and a blood vessel region model.
[FIG. 6] A diagram showing an example output image of the vector concentration filter.
[FIG. 7A] A diagram showing a filtered image before threshold processing.
[FIG. 7B] A diagram showing a filtered image after threshold processing.
[FIG. 8] A diagram explaining the method of calculating sphericity.
[FIG. 9A] A diagram explaining the identification method based on the rule-based method.
[FIG. 9B] A diagram explaining the identification method based on the rule-based method.
[FIG. 10] A diagram showing an example output of detection results for cerebral aneurysm candidates.
[FIG. 11] A flowchart explaining the blood vessel site discrimination processing executed by the medical image processing apparatus.
[FIG. 12A] A diagram showing an example reference image.
[FIG. 12B] A diagram showing the original image used to create the reference image.
[FIG. 13A] A diagram showing a target image and histograms of the target image before and after normalization processing.
[FIG. 13B] A diagram showing a target image of a subject different from FIG. 13A and histograms of the target image before and after normalization processing.
[FIG. 14A] A diagram showing a target image.
[FIG. 14B] A diagram showing a blood vessel extraction image obtained by extracting the blood vessels from the target image of FIG. 14A.
[FIG. 15A] A diagram showing a blood vessel extraction image and a reference image.
[FIG. 15B] A diagram in which the blood vessel extraction image and the reference image shown in FIG. 15A are superimposed.
[FIG. 16A] A diagram showing landmarks in the reference image.
[FIG. 16B] A diagram showing corresponding points in the blood vessel extraction image.
[FIG. 17A] A diagram showing a blood vessel extraction image and the discrimination results for its blood vessel sites.
[FIG. 17B] A diagram showing a blood vessel extraction image and the discrimination results for its blood vessel sites.
[FIG. 18] A diagram showing an example display identifying each blood vessel site discriminated in the target image.
[FIG. 19] A flowchart showing the detection processing according to the second embodiment.
[FIG. 20] A diagram showing the analysis bank of the GC filter bank.
[FIG. 21] A diagram showing the filter bank A(z^j).
[FIG. 22] A diagram showing the reconstruction bank of the GC filter bank.
[FIG. 23] A diagram showing the filter bank S(z^j).
Explanation of reference numerals
[0023] 10 Medical image processing apparatus
11 Control unit
12 Operation unit
13 Display unit
14 Communication unit
15 Storage unit
16 Lesion candidate detection unit
Best mode for carrying out the invention
[0024] <First Embodiment>
First, the configuration will be described.
FIG. 1 shows the configuration of the medical image processing apparatus 10 according to the present embodiment.
The medical image processing apparatus 10 detects candidate lesion regions in a medical image obtained by diagnostic imaging by performing image analysis on that image.
The medical image processing apparatus 10 may also be provided in a medical image system in which various devices are connected via a network, such as an image generation device that generates medical images, a server that stores and manages medical images, and an interpretation terminal that retrieves medical images stored on the server and displays them on display means for interpretation by a physician. This embodiment describes an example in which the present invention is realized by the medical image processing apparatus 10 alone, but the functions of the medical image processing apparatus 10 may instead be distributed among the component devices of such a medical image system so that the invention is realized by the medical image system as a whole.
[0025] Each part of the medical image processing apparatus 10 will now be described.
As shown in FIG. 1, the medical image processing apparatus 10 comprises a control unit 11, an operation unit 12, a display unit 13, a communication unit 14, a storage unit 15, and a lesion candidate detection unit 16.
[0026] The control unit 11 comprises a CPU (Central Processing Unit), RAM (Random Access Memory), and the like; it reads out the various control programs stored in the storage unit 15, performs various computations, and centrally controls the processing operations of the units 12 to 16.
[0027] The operation unit 12 comprises a keyboard, a mouse, and the like; when these are operated by the operator, it generates operation signals corresponding to the operations and outputs them to the control unit 11. A touch panel constructed integrally with the display of the display unit 13 may also be provided.
[0028] The display unit 13 comprises display means such as an LCD (Liquid Crystal Display) and, in response to instructions from the control unit 11, displays various kinds of display information on this display means: operation screens, medical images, the detection results for lesion candidates detected in the medical images, their detection information, and so on.
[0029] The communication unit 14 comprises a communication interface and exchanges information with external devices on the network. For example, the communication unit 14 performs communication operations such as receiving medical images generated by the image generation device and transmitting the lesion candidate detection information produced by the medical image processing apparatus 10 to an interpretation terminal.
[0030] The storage unit 15 stores the control programs used by the control unit 11 and various processing programs, such as the detection processing used by the lesion candidate detection unit 16, as well as the parameters required to execute each program and data such as their processing results.
The storage unit 15 also stores the medical images subject to lesion candidate detection, information on the detection results, and the like.
[0031] The lesion candidate detection unit 16, in cooperation with the processing programs stored in the storage unit 15, applies various kinds of image processing (gradation conversion, sharpness adjustment, dynamic range compression, and so on) to the image to be processed as necessary. The lesion candidate detection unit 16 also executes the detection processing and outputs its results. The content of the detection processing is described later.
[0032] Next, the lesion candidate detection processing performed by the medical image processing apparatus 10 is described.
This embodiment describes an example of detecting lesion candidates for unruptured cerebral aneurysms from an MRA image (a three-dimensional image) obtained by imaging a patient's head with MRI and visualizing the blood flow of the intracranial vessels. A cerebral aneurysm is a bulge (dilation) formed in the wall of an artery, produced by the pressure of the blood flow acting on the arterial wall. Thrombi readily form inside a cerebral aneurysm, and if the aneurysm ruptures, serious conditions such as subarachnoid hemorrhage develop.
[0033] FIG. 2 is a flowchart explaining the flow of the detection processing. As noted above, this processing is executed by the lesion candidate detection unit 16 reading the detection processing program stored in the storage unit 15.
As shown in FIG. 2, the detection processing first inputs the three-dimensional MRA image data (step S1). Specifically, the lesion candidate detection unit 16 reads the MRA image to be processed that is stored in the storage unit 15.
[0034] Images obtained by MRI are described here.
MRI is a method of obtaining images using nuclear magnetic resonance (hereinafter NMR) in a magnetic field.
In NMR, the subject is placed in a static magnetic field and then irradiated with an RF pulse (radio wave) at the resonance frequency of the atomic nuclei to be detected in the subject. In medical use, the resonance frequency of hydrogen nuclei, which constitute the water abundant in the human body, is normally used. When the subject is irradiated with the RF pulse, excitation occurs: the nuclear spins of the atoms resonating at the resonance frequency become phase-aligned, and the nuclear spins absorb the energy of the RF pulse. When the RF irradiation is stopped in this excited state, relaxation occurs: the phases of the nuclear spins become non-uniform again, and the nuclear spins release their energy. The time constant of this phase relaxation is T2, and the time constant of the energy relaxation is T1.
[0035] In MRI, various types of image suited to the purpose of the examination can be obtained by changing the imaging method. For example, adjusting the repetition time TR and echo time TE so that TR = T1 and TE << T2 yields a T1-weighted image, while TR >> T1 and TE = T2 yields a T2-weighted image. The two differ in image contrast: T1-weighted images are used mainly for depicting anatomical structure, and T2-weighted images for detecting lesions. Images taken by the FLAIR method are T2-weighted images in which the signal from water is attenuated, and are specifically called FLAIR images.
[0036] MRA is a blood vessel imaging method in MRI. In MRI, applying a gradient magnetic field along the direction from the subject's feet to the head (this direction is called the body axis) makes it possible to have only a specific slice (cross-section) absorb energy. During imaging, the blood within the vessels of a slice is saturated by the RF pulses; because blood in vessels is always flowing, unsaturated blood flows into the slice over time and the signal intensity in that slice increases. MRA is a method of imaging the vessels with blood flow by visualizing this high signal.
[0037] FIG. 3A shows an example MRA image.
As shown in FIG. 3A, vessel regions with blood flow give a high signal, so the vessel regions appear white in the MRA image.
[0038] When the three-dimensional data of such an MRA image has been input, preprocessing is applied to the three-dimensional image data in preparation for candidate detection (step S2). The preprocessing consists of normalization of the image data and gradation conversion. Normalization is performed by linear interpolation so that the data become three-dimensional image data whose voxels have edges of equal size in all directions. Density gradation conversion is then applied to the three-dimensional image data converted to these equal-sized voxels, linearly converting the signal value of each voxel to a density gradation of 0 to 1024. Higher signal values are converted to values closer to the density value 1024, and lower signal values to values closer to 0. The density gradation range is not limited to 0 to 1024 and can be set as appropriate.
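The linear density-gradation conversion of step S2 can be sketched as below. This is an illustrative sketch, not the patented implementation: the function name and the use of NumPy are assumptions, and the isotropic-voxel resampling is assumed to have been done beforehand.

```python
import numpy as np

def to_density_scale(volume, max_grade=1024):
    """Linearly map raw signal values onto the 0..max_grade density scale.

    High signal values map near max_grade and low signal values near 0,
    matching the linear gradation conversion described for step S2.
    """
    v = volume.astype(np.float64)
    lo, hi = v.min(), v.max()
    if hi == lo:                      # flat volume: map everything to 0
        return np.zeros_like(v)
    return (v - lo) / (hi - lo) * max_grade
```

The range endpoint (1024 here) is a parameter, reflecting the remark that the density gradation range can be set as appropriate.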
[0039] When preprocessing is complete, the vessel image region is extracted from the three-dimensional MRA image (step S3). First, threshold processing is applied to binarize the MRA image. In general, in an MRA image the vessel regions appear white and the other regions blackish, as shown in FIG. 3A, so in the binarized image the vessel regions take a value different from the other regions. The vessel region is then extracted by region growing. First, the binarized image is used to determine a starting voxel (the whitest, highest-density voxel); then, in the three-dimensional MRA image before binarization, the 26 voxels neighboring the determined starting voxel are examined, and any neighboring voxel satisfying a given criterion (for example, a density value of 500 or more) is judged to belong to the vessel region. The same processing is repeated for the neighboring voxels judged to be vessel region. By thus sequentially extracting the voxels that satisfy the criterion while expanding the region, the vessel region can be extracted. FIG. 3B shows the vessel region extracted from the MRA image of FIG. 3A, binarized with the vessel region in white (density value 1024) and the other regions in black (density value 0).
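The region-growing extraction just described can be outlined as follows. This is a hedged illustration, not the patent's implementation: the function name and the breadth-first traversal are assumptions, the seed is passed in explicitly (in practice it is the highest-density voxel found via the binarized image), and the criterion "density value of 500 or more" is taken from the example condition in the text.

```python
import numpy as np
from collections import deque

def grow_vessel_region(volume, seed, threshold=500):
    """3D region growing: starting from `seed`, accept any of the 26
    neighbouring voxels whose density value meets the criterion
    (>= threshold), and keep expanding until no new voxel qualifies."""
    mask = np.zeros(volume.shape, dtype=bool)
    if volume[seed] < threshold:
        return mask
    mask[seed] = True
    queue = deque([seed])
    # offsets of the 26-neighbourhood around a voxel
    offsets = [(dz, dy, dx)
               for dz in (-1, 0, 1) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
               if (dz, dy, dx) != (0, 0, 0)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < volume.shape[i] for i in range(3)) \
                    and not mask[n] and volume[n] >= threshold:
                mask[n] = True
                queue.append(n)
    return mask
```

The returned boolean mask corresponds to the binary vessel image of FIG. 3B (vessel voxels True, everything else False).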
[0040] Next, filter processing using a vector concentration filter as shown in FIG. 4 is applied to the extracted three-dimensional MRA image of the vessel region, and primary candidate regions for cerebral aneurysms are detected from the processed image output by the filtering (step S4). The vector concentration filter calculates the degree of vector concentration at each voxel and outputs an image in which the calculated degree of vector concentration is used as that voxel's value. The degree of vector concentration focuses on the directions of the gradient vectors of the density change and evaluates how strongly the gradient vectors of the neighboring region converge on a given point of interest.
[0041] FIG. 5 shows a cerebral aneurysm model and a blood vessel model.
As shown in FIG. 5, in a cerebral aneurysm a spherical aneurysm sits on a linear vessel, so the gradient vectors (the arrows in the figure indicate the directions of the gradient vectors) tend to point toward the center of the aneurysm. A vessel, by contrast, is linear in shape, so no such tendency arises. Consequently, regions close in shape to the cerebral aneurysm model have a higher degree of vector concentration than other vessel regions. The primary candidate regions for cerebral aneurysms can therefore be detected by having the vector concentration filter output only the regions with a high degree of vector concentration.
[0042] Specifically, a voxel of interest P as shown in Fig. 4 is scanned over the extracted image of the blood vessel region, and the vector concentration degree is calculated whenever extracted blood vessel region is present within a sphere of radius R centered on the voxel of interest P.
The vector concentration degree is calculated by Equation 1 below.

[Equation 1]

    GC(P) = (1/M) Σ_{j=1}^{M} cos θ_j    (1)

Here, the angle θ_j is the angle between the direction vector from the voxel of interest P to a peripheral voxel Q_j and the direction of the gradient vector at that peripheral voxel Q_j, and M is the number of peripheral voxels Q_j included in the calculation (see Fig. 4).
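As a concrete illustration, the per-voxel evaluation of Equation 1 can be sketched in Python as follows. This is a hypothetical implementation, not part of the specification: the precomputed gradient field `grad`, the array layout, and the skipping of zero-gradient voxels are all assumptions. Note also that a literal reading of Equation 1 yields values in [−1, 1], whereas the specification states the filter output lies in the range 0 to 1, so the actual sign convention or normalization may differ.

```python
import numpy as np

def vector_concentration(grad, p, radius):
    """Vector concentration degree GC(P) of Equation 1 at voxel p.

    grad: array of shape (X, Y, Z, 3) holding a gradient vector per voxel
    (assumed precomputed); p: voxel of interest; radius: R of the sphere.
    """
    p = np.asarray(p)
    cos_sum, m = 0.0, 0
    r = int(np.ceil(radius))
    for dx in range(-r, r + 1):
        for dy in range(-r, r + 1):
            for dz in range(-r, r + 1):
                d = np.array([dx, dy, dz], dtype=float)
                dist = np.linalg.norm(d)
                if dist == 0 or dist > radius:
                    continue  # stay inside the sphere, skip P itself
                q = p + (dx, dy, dz)
                if np.any(q < 0) or np.any(q >= grad.shape[:3]):
                    continue
                g = grad[tuple(q)]
                gnorm = np.linalg.norm(g)
                if gnorm == 0:
                    continue  # voxels without a gradient are not counted in M
                # cosine of the angle between the direction P -> Qj and the
                # gradient vector at Qj
                cos_sum += float(np.dot(d / dist, g / gnorm))
                m += 1
    return cos_sum / m if m else 0.0
```

For a synthetic field whose gradients all point radially away from P, every cosine is 1 and GC(P) = 1, the maximal concentration.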
[0043] When the vector concentration degree has been obtained for each voxel constituting the blood vessel region, a filtered image as shown in Fig. 6, in which the vector concentration degree serves as the voxel value, is output. Since the vector concentration is output in the range 0 to 1, Fig. 6 is rendered so that the larger the vector concentration (the closer to 1), the whiter and denser the voxel appears.
[0044] Next, the filtered image is binarized by threshold processing using, for example, a threshold of 0.5, and primary candidate regions are detected. That is, the binarization extracts regions consisting of voxels whose vector concentration exceeds the threshold 0.5 as primary candidate regions. Fig. 7A shows a part of the filtered image of Fig. 6; binarizing this filtered image yields the binary image shown in Fig. 7B. As shown in Fig. 7B, the image regions that appear white in the binary image, i.e., those with high vector concentration, are output as primary candidate regions.
[0045] Note that the threshold for binarization is determined in advance using teacher images in which the presence of a cerebral aneurysm is already known. That is, a threshold is sought such that binarizing the teacher image extracts only the image region of the cerebral aneurysm whose existence is already known. Alternatively, the threshold may be obtained by statistical analysis, for example with the p-tile method. The p-tile method computes a density histogram and takes as the threshold the density value at which a fixed area ratio of p% of the histogram is occupied. In this embodiment, the density histogram of the filtered image is computed, and the density value at which p% of the area is occupied, counted from the highest-density side, is determined as the threshold.
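The p-tile thresholding described in the last paragraph can be sketched as follows (a minimal illustration; the function name and the convention that p counts voxels from the highest-density side are our assumptions):

```python
import numpy as np

def p_tile_threshold(image, p):
    """p-tile threshold: the density value at which the top p% of the
    area of the density histogram, counted from the highest-density side,
    is occupied."""
    values = np.sort(image.ravel())[::-1]            # descending densities
    k = max(1, int(round(values.size * p / 100.0)))  # voxels in the top p%
    return values[k - 1]
```

Voxels with density at or above the returned value then make up roughly the brightest p% of the image.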
[0046] Next, feature quantities characterizing the primary candidate regions are calculated in order to perform secondary detection (step S5). Since a cerebral aneurysm has a certain size and a spherical shape, this embodiment uses as features the size of the candidate region, its sphericity, and the mean vector concentration of the voxels in the region. However, any feature may be used as long as it can characterize a cerebral aneurysm; for example, the maximum vector concentration of the voxels, or the standard deviation of the voxel density values, may be calculated instead.
[0047] As the size feature, the volume of the voxels constituting the candidate region is calculated. Here, to shorten the processing time, the number of voxels rather than the actual volume is counted and used in the subsequent calculations as an index of volume. The sphericity feature, as shown in Fig. 8, is obtained by placing a sphere of the same volume as the primary candidate region so that the centroid of the sphere coincides with the centroid of the region, and taking the ratio of the volume of the part of the primary candidate region that coincides with this sphere to the total volume of the primary candidate region.
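The sphericity of Fig. 8 can be sketched as follows, assuming the candidate region is given as a boolean voxel mask and using the voxel count as the volume index, as in the text:

```python
import numpy as np

def sphericity(mask):
    """Sphericity as in Fig. 8: the fraction of the region that overlaps a
    sphere of equal volume centered on the region's centroid."""
    coords = np.argwhere(mask)
    volume = len(coords)                  # voxel count as the size index
    centroid = coords.mean(axis=0)
    radius = (3.0 * volume / (4.0 * np.pi)) ** (1.0 / 3.0)  # equal-volume sphere
    inside = np.linalg.norm(coords - centroid, axis=1) <= radius
    return inside.sum() / volume
```

A compact, ball-like region scores near 1, while an elongated vessel-like region scores low, which is what makes the feature discriminative here.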
[0048] When the features have been calculated, secondary detection is performed based on them (step S6). In the secondary detection, cerebral aneurysms are discriminated from blood vessels, which are normal tissue. An example of a classifier using a rule-based method is described here, but the classifier is not limited to this; any technique capable of the discrimination may be used, such as an artificial neural network, a support vector machine, or discriminant analysis.
[0049] In the rule-based classifier, as shown in Figs. 9A and 9B, profiles are first created showing the relationship of sphericity to size and of mean vector concentration to size. The ranges to be detected as secondary candidates (the ranges enclosed by the solid lines in Figs. 9A and 9B) are determined in the classifier in advance. When the features of a primary candidate to be classified are plotted as variate data on these profiles, the candidate is identified as a true-positive candidate if its variate data fall within the detection range, and as a false-positive candidate otherwise. That is, only primary candidates whose feature data lie within the detection ranges pass the secondary detection.
[0050] Note that the above detection ranges are determined using teacher images known in advance to show a cerebral aneurysm or a normal blood vessel. First, the size, sphericity, and mean vector concentration are obtained from the teacher images of cerebral aneurysms and of normal vessels, and from these features the profiles of sphericity versus size and of mean vector concentration versus size are created. In Figs. 9A and 9B, the variate data indicated by the ○ markers are true positives, i.e., teacher data known to be cerebral aneurysms, and the variate data indicated by the ● markers are false positives, i.e., teacher data known to be blood vessels. Because the image characteristics of cerebral aneurysms and normal vessels differ, the distributions of the size, sphericity, and mean vector concentration features are biased. Therefore, two thresholds are determined for each of size, sphericity, and mean vector concentration so that the range of the profile containing the teacher data of all cerebral aneurysms is minimized. The region enclosed by these thresholds is the detection range.
[0051] In the examples of Figs. 9A and 9B, the range enclosed by the four dotted lines indicating the thresholds (only the part enclosing that range is drawn as a solid line) is the detection range, and a detection range is determined for each of the size-sphericity and size-mean vector concentration profiles. A primary candidate located within the detection range in both profiles (its variate data are indicated by the △ markers) passes the secondary detection as a cerebral aneurysm candidate.
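The rule-based secondary detection can be sketched as follows. The feature names and the dictionary structure are hypothetical; only the logic — keep a primary candidate only when every feature falls inside its predetermined detection range — comes from the text:

```python
def rule_based_detect(candidates, ranges):
    """Keep only primary candidates whose every feature value lies inside
    its predetermined detection range (the true-positive side of the
    profiles).

    candidates: list of feature dicts; ranges: feature name -> (low, high).
    """
    kept = []
    for cand in candidates:
        if all(lo <= cand[name] <= hi for name, (lo, hi) in ranges.items()):
            kept.append(cand)
    return kept
```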
[0052] When secondary detection has been performed in this way, tertiary detection is further performed on the secondary candidates (step S7). In the tertiary detection, discriminant analysis is performed using the three features. Any discriminant analysis technique, such as the Mahalanobis distance, principal component analysis, or a linear discriminant function, may be applied, provided that a technique different from the one used in the secondary detection is applied.
[0053] Then, the candidate regions passing the tertiary detection are taken to be the final cerebral aneurysm candidates, and the detection result is output (step S8).
Fig. 10 shows an example of a detection result displayed and output on the display unit 13.
As shown in Fig. 10, the display unit 13 displays, on a MIP image created from the three-dimensional MRA image, marker information (the arrow markers in Fig. 10) indicating the cerebral aneurysm candidate regions found by the tertiary detection. Such a display makes the cerebral aneurysm candidate regions distinguishable from the other regions. A MIP image is a two-dimensional image created by applying MIP processing to the three-dimensional MRA image data, and allows the structures in the image to be displayed three-dimensionally. MIP processing, called maximum intensity projection, projects parallel rays from a given direction and reflects the maximum intensity (signal value) encountered among the voxels along each ray onto the projection plane, creating a two-dimensional image that permits three-dimensional observation.
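For volume data held as an array, the MIP operation just described reduces to taking the maximum along the projection axis (this sketch assumes an axis-aligned parallel projection):

```python
import numpy as np

def mip(volume, axis=0):
    # Maximum intensity projection: keep the brightest voxel (signal value)
    # encountered along each parallel ray through the volume
    return volume.max(axis=axis)
```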
[0054] Information concerning the detection of the cerebral aneurysm candidates, such as the mean vector concentration calculated for each candidate region, may also be output as reference information for the doctor's diagnosis. The color of the marker information may also be varied according to the degree of vector concentration; for example, by coloring the arrow marker red when the vector concentration is 0.8 or higher, yellow for 0.7 to 0.8, and blue for 0.5 to 0.7, the doctor can easily grasp visually that the vector concentration, i.e., the degree to which the aneurysm is spherical, is strong.
[0055] Although Fig. 10 shows an example in which the positions of the cerebral aneurysm candidates are made identifiable by marker information, the display method is not limited to this, as long as the candidate regions are displayed so as to be distinguishable from the other regions. For example, a three-dimensional display may be produced using a volume rendering method instead of MIP processing. Volume rendering is a technique that produces a three-dimensional display by assigning color information and opacity to the voxels of each partial region; by setting a high opacity for the region of interest and a low opacity for the other regions, the region of interest can be made to stand out. Accordingly, at display time, the opacity and the color information corresponding to that opacity are set for each region.
[0056] The detection result may also be shown using the filtered image obtained by the vector concentration filter (see Fig. 6) instead of a MIP image or the like. In this case, the calculated vector concentration can be made visually comprehensible to the doctor by color-coding, for example rendering regions of low vector concentration in blue and regions of high values in red. Furthermore, a filtered image in which the color of the blood vessel region is varied according to the vector concentration may be displayed superimposed at the corresponding position on the MIP image. In this way, vector concentration information can be provided as reference information for the doctor in detecting cerebral aneurysms.
[0057] Next, the blood vessel part discrimination process will be described with reference to Fig. 11.
This blood vessel part discrimination process is a software process realized in cooperation between the control unit 11 and the processing program for the blood vessel part discrimination process stored in the storage unit 15. In the blood vessel part discrimination process, one or more blood vessel parts contained in a blood vessel image appearing in a three-dimensional MRA image of the head are discriminated.
[0058] MRA is one type of MRI blood vessel imaging method. In MRI, applying a gradient magnetic field in the direction from the subject's feet to the head (this direction is called the body axis) makes it possible to have only a specific slice (cross section) absorb energy. At the time of imaging, the blood in the vessels within the slice is in a state saturated by the RF pulses; since blood vessels always carry flow, unsaturated blood flows in over time and the signal intensity in that slice increases. MRA is a method of imaging blood vessels with blood flow by forming an image of this high signal.
[0059] First, the reference image required for discriminating the blood vessel parts will be described.
As shown in Fig. 12A, the reference image is an image in which, for the blood vessel image in a three-dimensional MRA image, the positions and names of one or more blood vessel parts have been set in advance. Here, a blood vessel part refers to an anatomical classification of blood vessels, and the position of a blood vessel part refers to the positions of the voxels belonging to that part.
[0060] Fig. 12A shows an example in which the positions and names are set for the eight blood vessel parts contained in the blood vessel image (the anterior cerebral artery, right middle cerebral artery, left middle cerebral artery, right internal carotid artery, left internal carotid artery, right posterior cerebral artery, left posterior cerebral artery, and basilar artery). Although Fig. 12A labels only three of the eight parts (the right middle cerebral artery, anterior cerebral artery, and basilar artery), names are set for all eight.
[0061] The reference image g2 is created from the three-dimensional data of a head MRA image g1 selected for use as the reference image, as shown in Fig. 12B. First, axial images (two-dimensional tomographic images obtained by cutting the voxels along planes perpendicular to the body axis) are created from the three-dimensional data at certain intervals. Then, in one axial image, the voxels belonging to each blood vessel part are designated by manual operation based on a doctor's indications, and the name of that blood vessel part is designated as well. By repeating this for each axial image sliced at successive positions along the body axis, the positions of the voxels belonging to each blood vessel part and the names of the parts can be set for all the voxels constituting the three-dimensional data.
[0062] In the reference image g2, landmark voxels are also set at characteristic locations such as inflection points of vessels, terminal points, and intersections between blood vessel parts. The landmarks are used for registration between the target image and the reference image, as described in detail later. The landmarks are likewise set by manual operation based on a doctor's indications.
[0063] The reference image g2 created as described above is stored in the storage unit 15.
The reference image g2 may be created by the control unit 11 of the medical image processing apparatus 10, or one created externally may be stored in the storage unit 15. Also, although Fig. 12A displays each blood vessel part distinctively in order to show the eight parts contained in the blood vessel image, the actual reference image g2 is a binarized image with a black background (low signal values) and a white blood vessel image (high signal values). The position information of the voxels belonging to each blood vessel part, the name information, and the position information of the landmark voxels are attached to the reference image, or are stored in the storage unit 15 as a separate file associated with the reference image.
[0064] Next, the blood vessel part discrimination process using the above reference image will be described concretely.
In the medical image processing apparatus 10, normalization processing is first applied by the control unit 11 to the three-dimensional MRA image to be discriminated (hereinafter, the target image) (step S11).
Because the MRA image used as the target image is an image of blood flow, its voxels may be rectangular parallelepipeds with unequal sides depending on the subject and the imaging conditions, and the maximum and minimum voxel values may vary. Normalization processing is therefore applied to unify the preconditions concerning the target image.
[0065] In the normalization processing, the target image is first resampled by linear interpolation so that all the sides constituting each voxel are of equal size. Next, a histogram of the values of all voxels in the target image is created, and all voxel values of the target image are linearly converted to gradations from 0 to 1024, with the voxel values in the top 5% of the histogram mapped to 1024 and the minimum voxel value to 0. In this conversion, the higher (stronger-signal) a voxel value is, the closer its density value is to 1024, and the lower it is, the closer to 0. The range of density gradations is not limited to 0 to 1024 and can be set as appropriate.
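The intensity part of the normalization can be sketched as follows (isotropic resampling of the voxel grid is assumed to have been done separately; the function name and the exact handling of the top-5% cut are our assumptions):

```python
import numpy as np

def normalize(volume, top_fraction=0.05, max_level=1024):
    """Linearly rescale voxel values so the minimum maps to 0 and the value
    at the top 5% of the histogram maps to max_level; brighter voxels are
    clipped to max_level."""
    flat = np.sort(volume.ravel())
    hi = flat[int(np.ceil(flat.size * (1.0 - top_fraction))) - 1]  # 95th pct
    lo = flat[0]
    out = (volume - lo) * (max_level / float(hi - lo))
    return np.clip(out, 0, max_level)
```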
[0066] Figs. 13A and 13B show an example of the normalization processing.
The target image g3 shown in Fig. 13A and the target image g4 shown in Fig. 13B were taken of different patients. The histogram h1 obtained from the target image g3 (see Fig. 13A) and the histogram h3 obtained from the target image g4 (see Fig. 13B) therefore share the feature of having two local maxima, but their ranges of voxel values differ considerably, and it can be seen that their histogram characteristics differ as a whole. When the above normalization processing is applied to the target images g3 and g4 and histograms are created again, the histogram h2 shown in Fig. 13A and the histogram h4 shown in Fig. 13B are obtained. As h2 and h4 show, the normalization processing makes the histogram characteristics of the target images g3 and g4 nearly identical.
[0067] When the normalization processing is finished, the control unit 11 extracts the blood vessel image from the normalized target image (step S12).
First, threshold processing is applied to the target image to binarize it. In general, in an MRA image the blood vessel image appears white and the other tissue appears dark, as shown in Fig. 14A, so in the binary image the blood vessel image takes a value different from the other regions. The region having signal values comparable to those of the blood vessel image is therefore extracted by the region growing method.
[0068] In the region growing method, the starting voxel (the whitest, highest-density voxel) is determined using the binary image, and in the target image before binarization the 26 voxels neighboring the voxel determined as the starting point are examined; a neighboring voxel satisfying a certain criterion (for example, a density value of 500 or more) is judged to belong to the blood vessel image. The same processing is then repeated for the neighboring voxels judged to belong to the blood vessel image. By sequentially extracting the voxels satisfying the criterion while expanding the region in this way, the image region of the blood vessel image can be extracted.
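The 26-neighborhood region growing just described can be sketched as follows (seed selection from the binary image is assumed to have been done beforehand; the threshold of 500 follows the example in the text):

```python
import numpy as np
from collections import deque
from itertools import product

def region_grow(volume, seed, threshold=500):
    """Grow a region from the seed voxel: a voxel joins the region when it
    is one of the 26 neighbors of an accepted voxel and its density value
    meets the criterion (here, >= threshold)."""
    mask = np.zeros(volume.shape, dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    offsets = [o for o in product((-1, 0, 1), repeat=3) if o != (0, 0, 0)]
    while queue:
        x, y, z = queue.popleft()
        for dx, dy, dz in offsets:
            q = (x + dx, y + dy, z + dz)
            if (all(0 <= q[i] < volume.shape[i] for i in range(3))
                    and not mask[q] and volume[q] >= threshold):
                mask[q] = True
                queue.append(q)
    return mask
```

Bright voxels that are not connected to the seed through the 26-neighborhood are not extracted, which is the property that separates the vessel tree from isolated bright noise.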
[0069] Fig. 14B shows a blood vessel extraction image g6 in which the blood vessel image has been extracted.
The blood vessel extraction image g6 is obtained by extracting the blood vessel image from the normalized target image g5 shown in Fig. 14A and binarizing it, with the region of the blood vessel image set to white (density value 1024) and the other regions to black (density value 0).
[0070] Next, in order to bring the position of the blood vessel image in the blood vessel extraction image into approximate agreement with the position of the blood vessel image in the reference image, the control unit 11 performs registration based on the centroid position of each image (step S13). The centroid position is the position of the voxel that is the centroid of all the voxels belonging to the blood vessel image.
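The centroid-based coarse registration of step S13 can be sketched as follows. This is a simplified illustration: `np.roll` wraps voxels around the volume edges, whereas an actual implementation would translate with padding:

```python
import numpy as np

def centroid_align(moving_mask, fixed_mask):
    """Translate the extracted vessel mask so that its centroid P falls on
    the centroid Q of the reference vessel mask (integer shift)."""
    p = np.argwhere(moving_mask).mean(axis=0)   # centroid P of the extraction
    q = np.argwhere(fixed_mask).mean(axis=0)    # centroid Q of the reference
    shift = np.round(q - p).astype(int)
    moved = np.roll(moving_mask, shift, axis=(0, 1, 2))
    return shift, moved
```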
[0071] This will be described concretely with reference to Figs. 15A and 15B.
Fig. 15A superimposes the blood vessel extraction image and the reference image before registration; it shows that simply overlaying the two images does not bring the positions of their blood vessel images into agreement.
The control unit 11 therefore obtains the positions of the centroid P (x1, y1, z1) of the blood vessel extraction image and the centroid Q (x2, y2, z2) of the reference image, as shown in Fig. 15A. The blood vessel extraction image or the reference image is then translated so that the centroid positions P and Q coincide. Fig. 15B shows the result of bringing the centroids P and Q into coincidence by translation; the positions of the blood vessel images in the blood vessel extraction image and the reference image now roughly agree.
[0072] Further, in order to perform the registration with higher accuracy, the control unit 11 applies a rigid transformation to the blood vessel extraction image (step S14).
First, as preprocessing for the rigid transformation, a search for corresponding points using the cross-correlation coefficient is performed. The purpose is to set a plurality of corresponding points in each of the two images to be registered and then rigidly transform one image so that the corresponding points set in the two images coincide. Here, the landmark voxels predetermined in the reference image and the voxels of the blood vessel extraction image whose local image characteristics resemble them are set as corresponding points. The similarity of image characteristics is judged from the cross-correlation coefficient computed between the blood vessel extraction image and the reference image.
[0073] Specifically, as shown in Fig. 16A, corresponding points are searched for in the blood vessel extraction image for each of the 12 landmarks set in advance in the blood vessel image of the reference image g7. In the search, as shown in Fig. 16B, the voxel in the blood vessel extraction image g8 at the position corresponding to each landmark of the reference image g7 is taken as the starting point, and the voxels within a range of −10 to +10 voxels in the X-, Y-, and Z-axis directions (a cubic region of 21 × 21 × 21 voxels) around the starting point and the landmark voxel are searched in the blood vessel extraction image g8 and the reference image g7; for each voxel, the cross-correlation coefficient C (hereinafter, correlation value C) is calculated by Equation 2 below.
[0074] [Equation 2]

    C = { (1/(I·J·K)) Σ_{k=1}^{K} Σ_{j=1}^{J} Σ_{i=1}^{I} (A(i,j,k) − α)(B(i,j,k) − β) } / (σ_A σ_B)    (2)

In Equation 2 above, A(i, j, k) denotes the voxel at position (i, j, k) in the reference image g7, and B(i, j, k) the voxel at the corresponding position in the blood vessel extraction image g8. I·J·K denotes the size of the search region, I·J·K = 21 × 21 × 21.
α and β are the mean voxel values within the search region of the reference image g7 and of the blood vessel extraction image g8, respectively, given by Equations 3 and 4 below; σ_A and σ_B are the standard deviations of the voxel values within those search regions, given by Equations 5 and 6 below.

[Equations 3 to 6]

    α = (1/(I·J·K)) Σ_{k=1}^{K} Σ_{j=1}^{J} Σ_{i=1}^{I} A(i, j, k)    (3)

    β = (1/(I·J·K)) Σ_{k=1}^{K} Σ_{j=1}^{J} Σ_{i=1}^{I} B(i, j, k)    (4)

    σ_A = √[ (1/(I·J·K)) Σ_{k=1}^{K} Σ_{j=1}^{J} Σ_{i=1}^{I} (A(i, j, k) − α)² ]    (5)

    σ_B = √[ (1/(I·J·K)) Σ_{k=1}^{K} Σ_{j=1}^{J} Σ_{i=1}^{I} (B(i, j, k) − β)² ]    (6)
[0076] The correlation value C takes values in the range -1.0 to 1.0; the closer C is to the maximum value 1.0, the more similar the image characteristics of the reference image g7 and the blood vessel extraction image g8.
Accordingly, the position of the voxel giving the largest correlation value C is set as the corresponding point in the blood vessel extraction image g8 for that landmark of the reference image g7.
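The landmark-to-corresponding-point search of paragraphs [0073] to [0076] can be sketched as follows. This is an illustrative reading of the text, not the disclosed implementation: the patch half-width `w` used to compare local neighborhoods is an assumed parameter, while the ±10-voxel search range follows the description.

```python
import numpy as np

def ncc(a, b):
    """Cross-correlation coefficient C of Equation 2 for two same-shaped patches."""
    a = a.astype(float); b = b.astype(float)
    uk = a.size                      # UK = number of voxels in the patch
    alpha, beta = a.mean(), b.mean()
    sa, sb = a.std(), b.std()        # population std, matching Equations 5 and 6
    if sa == 0 or sb == 0:
        return 0.0
    return ((a - alpha) * (b - beta)).sum() / (uk * sa * sb)

def find_corresponding_point(ref, ext, landmark, r=10, w=5):
    """Search a (2r+1)^3 neighborhood of `landmark` in the extraction image `ext`
    for the voxel whose local (2w+1)^3 patch correlates best with the patch
    around `landmark` in the reference image `ref`."""
    lz, ly, lx = landmark
    ref_patch = ref[lz-w:lz+w+1, ly-w:ly+w+1, lx-w:lx+w+1]
    best, best_pos = -2.0, landmark
    for dz in range(-r, r + 1):
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                z, y, x = lz + dz, ly + dy, lx + dx
                patch = ext[z-w:z+w+1, y-w:y+w+1, x-w:x+w+1]
                if patch.shape != ref_patch.shape:
                    continue  # skip positions whose window leaves the volume
                c = ncc(ref_patch, patch)
                if c > best:
                    best, best_pos = c, (z, y, x)
    return best_pos, best
```

With `r=10` the search covers the 21 × 21 × 21 cubic region of the text; the voxel giving the largest correlation value C becomes the corresponding point.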
[0077] When the corresponding points have been set, the control unit 11 aligns the blood vessel image of the blood vessel extraction image g8 with that of the reference image g7 by applying a rigid transformation to the blood vessel extraction image g8 based on these corresponding points. A rigid transformation is one type of affine transformation, performing coordinate transformation by rotation and translation. The alignment is performed by the ICP (Iterative Closest Point) algorithm, which repeats least-squares rigid transformations so that the corresponding points of the blood vessel extraction image g8 coincide with the landmarks of the reference image g7. In this algorithm, each time a rigid transformation is applied, the least-squares error of the distances between the landmarks of the reference image and the corresponding points of the blood vessel extraction image is calculated, and the rigid transformation is repeated until a termination condition is satisfied, such as the least-squares error crossing a given threshold. Note that the rigid transformation may be replaced by a full affine transformation (coordinate transformation by scaling, rotation, and translation).
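A single least-squares rigid fit of the kind iterated inside the ICP loop of paragraph [0077] can be sketched as below. The SVD-based (Kabsch) solver is a standard choice assumed here; the patent does not specify how the least-squares rigid transformation is computed.

```python
import numpy as np

def fit_rigid(src, dst):
    """Least-squares rotation R and translation t such that R @ src_i + t ~ dst_i.
    src, dst: (N, 3) arrays of matched points (corresponding points vs. landmarks)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    h = (src - src_c).T @ (dst - dst_c)        # 3x3 covariance of the centered points
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))     # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = dst_c - r @ src_c
    return r, t

def rigid_align_error(src, dst, r, t):
    """Sum of squared distances after applying the fitted rigid transform."""
    moved = src @ r.T + t
    return float(((moved - dst) ** 2).sum())
```

In an ICP-style loop this fit would be recomputed and the error of `rigid_align_error` monitored against the termination condition.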
[0078] Next, based on the reference image, the control unit 11 identifies each blood vessel site contained in the blood vessel image of the blood vessel extraction image aligned by the rigid transformation (step S15).
First, for every voxel belonging to each blood vessel site of the reference image (these are called target voxels), the squared Euclidean distance to a given voxel of interest in the blood vessel extraction image is calculated. The blood vessel site to which the target voxel with the shortest Euclidean distance belongs is then judged to be the blood vessel site of the voxel of interest. At the same time, the name of the blood vessel site of the voxel of interest is determined from the name of the blood vessel site set for that target voxel.
[0079] The above processing is performed with each voxel constituting the blood vessel image of the blood vessel extraction image taken in turn as the voxel of interest. When the corresponding blood vessel site has been determined for all voxels, the control unit 11 generates blood vessel site information indicating the positions of the voxels belonging to each identified site and the names of those sites, and attaches it to the target image (step S16). For example, if the voxel at position (x3, y3, z3) is judged to belong to the anterior cerebral artery, blood vessel site information indicating that the voxel at position (x3, y3, z3) belongs to the blood vessel site named "anterior cerebral artery" is attached, for example by being written into the header area of the target image.
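The nearest-neighbor site labeling of steps S15 and S16 can be sketched as follows; the brute-force distance matrix and the label strings are illustrative.

```python
import numpy as np

def label_voxels(query_pts, ref_pts, ref_labels):
    """Assign to each query voxel the site label of the nearest reference voxel,
    using the squared Euclidean distance as in step S15."""
    query_pts = np.asarray(query_pts, dtype=float)   # (N, 3) voxels of the extracted vessels
    ref_pts = np.asarray(ref_pts, dtype=float)       # (M, 3) voxels of the labeled reference
    # (N, M) matrix of squared distances between every query/reference pair
    d2 = ((query_pts[:, None, :] - ref_pts[None, :, :]) ** 2).sum(axis=2)
    nearest = d2.argmin(axis=1)
    return [ref_labels[i] for i in nearest]
```

For realistic volumes a spatial index (e.g. a k-d tree) would replace the full distance matrix, but the assignment rule is the same.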
[0080] FIGS. 17A and 17B show results of blood vessel site discrimination.
The blood vessel extraction image g9 shown in FIG. 17A and the blood vessel extraction image g11 shown in FIG. 17B were obtained from different subjects. The image g10 shown in FIG. 17A and the image g12 shown in FIG. 17B are images in which each blood vessel site has been discriminated from the blood vessel extraction images g9 and g11, respectively, and displayed in a different color for each site. Images g10 and g12 show that, by performing alignment through the rigid transformation, the blood vessel sites are identified in the same way even though the blood vessel images of g10 and g12 differ in form (position, size, running direction, and so on of the vessels).
[0081] The above is the flow from discriminating the blood vessel sites in the target image to attaching the blood vessel site information.
When a display instruction operation is then performed on such a target image via the operation unit 12, the control unit 11 applies MIP processing to the target image to generate a MIP image, which is displayed on the display unit 13. Hereinafter, the display of a MIP image is referred to as MIP display.
[0082] The MIP method creates a two-dimensional image by projecting the volume with parallel rays from a given direction and reflecting onto the projection plane the maximum intensity (voxel value) among the voxels on each projection line. The projection direction is the viewing direction from which the doctor wishes to observe. Here, the configuration allows the doctor to freely operate the observation direction: the control unit 11 creates a MIP image from the target image according to the observation direction designated via the operation unit 12 and displays it on the display unit 13.
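Restricted to a coordinate-axis viewing direction, the MIP operation described above reduces to a maximum along one array axis; a minimal sketch follows (the patent additionally allows arbitrary observation directions, which would require resampling along oblique rays).

```python
import numpy as np

def mip(volume, axis=0):
    """Maximum intensity projection of a 3-D volume along one coordinate axis:
    each pixel of the output holds the largest voxel value on its projection line."""
    return np.asarray(volume).max(axis=axis)
```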
[0083] When an instruction operation for identification display of the blood vessel sites is performed while the target image is MIP-displayed, the control unit 11 refers to the blood vessel site information attached to the target image and, based on this information, performs display control so that each blood vessel site is identifiable in the blood vessel image of the MIP image (step S17).
[0084] For example, when the voxel positions and names of the blood vessel sites have been determined from the blood vessel site information, the control unit 11 assigns a color to the voxels at the positions determined for each site: blue for voxels belonging to the anterior cerebral artery, green for voxels belonging to the basilar artery, and so on. The set colors are then reflected in the MIP image of the target image. Furthermore, an annotation image indicating the name of each blood vessel site is created and combined with the corresponding site of the MIP image.
[0085] FIG. 18 shows an example of identification display.
In FIG. 18, the MIP image g13 is the target image MIP-displayed from above the head. When identification display of the blood vessel sites is instructed while this MIP image g13 is displayed, the identification display image g14 is displayed. The identification display image g14 identifies and displays each blood vessel site, for example by assigning a different color to each of the eight kinds of blood vessel sites in the blood vessel extraction image.
When the doctor performs a selection operation on a blood vessel site in the identification display image g14, the display control of the control unit 11 displays an annotation m indicating the name of that site, such as "basilar artery", in association with the selected site.
[0086] When MIP display from a lateral direction is instructed for the above MIP image g13 viewed from above the head, the control unit 11 creates and displays a MIP image g15 corresponding to the lateral direction. In the MIP image g15, the blood vessel images on the near side and the far side overlap, making the display difficult to observe. Identification display of the blood vessel sites is possible for such a MIP image g15 as well: by referring to the blood vessel site information, it can be determined which voxel corresponds to which blood vessel site, so the voxels to be identification-displayed can be specified regardless of changes in the observation direction of the MIP display.
[0087] The identification display image corresponding to the MIP image g15 is the image g16. From this identification display image g16, any single blood vessel site can be extracted and displayed.
When one of the blood vessel sites is selected via the operation unit 12, the control unit 11 displays a blood vessel selection image g17, that is, a MIP image onto which only the intensities of the voxels of the selected site are projected, so that only the site selected from the MIP image g15 of the target image is extracted. In the blood vessel selection image g17, only the selected blood vessel site is MIP-displayed and the other sites are hidden, so the doctor can concentrate on observing only the site of interest.
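The vessel-selective MIP that produces an image like g17 can be sketched by masking all voxels outside the selected site before projecting; the label volume and site identifier are illustrative assumptions.

```python
import numpy as np

def selective_mip(volume, labels, site, axis=0):
    """MIP of only the voxels whose label equals `site`; all other
    blood vessel sites are suppressed (set to 0) before projection."""
    volume = np.asarray(volume, dtype=float)
    masked = np.where(labels == site, volume, 0.0)
    return masked.max(axis=axis)
```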
[0088] These display images g13 to g17 may be displayed side by side on the same screen, or switched one image per screen. In the former case, the MIP display image g13 can be compared with the identification display image g14, the blood vessel selection image g17, and so on; in the latter case, each of the images g13 to g17 can be observed in full-screen display, making details easier to examine.
Example
[0089] An example of the detection processing is shown below.
Three-dimensional MRA image data were obtained for 20 patients. The matrix size of these image data is 256 × 256, the spatial resolution is 0.625 to 0.78 mm, and the slice thickness is 0.5 to 1.2 mm. These image data are known to contain seven unruptured cerebral aneurysms, as determined by an experienced neurosurgeon.
[0090] First, these three-dimensional MRA image data were converted into image data of equal voxel size using linear interpolation, and normalized. As a result of this normalization, all image data became isotropic-voxel image data with a matrix size of 400 × 400 × 200 and a spatial resolution of 0.5 mm. Next, gradation conversion processing was applied to linearly convert the density gradation to the range 0 to 1024, after which the feature values of size (volume), sphericity, and vector concentration were calculated for the cerebral aneurysm regions determined by the doctor (true positives), and the same feature values were calculated for normal blood vessel regions (false positives). These feature values were used to determine the detection ranges of the rule-based method serving as the classifier in the secondary detection, and as training data for the discriminant analysis. Similarly, they were used as training data for the classifier in the tertiary detection, which was trained accordingly.
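The normalization to equal voxel size by linear interpolation can be sketched as below for the slice (z) axis only; real data would also need in-plane resampling, and the array and parameter names are illustrative.

```python
import numpy as np

def resample_to_isotropic(volume, slice_thickness, pixel_spacing):
    """Linearly interpolate along the slice (z) axis so that the z spacing
    equals the in-plane pixel spacing, giving approximately isotropic voxels.
    In-plane resampling is omitted for brevity."""
    volume = np.asarray(volume, dtype=float)
    nz = volume.shape[0]
    old_z = np.arange(nz) * slice_thickness            # physical z of each slice
    new_nz = int(round(old_z[-1] / pixel_spacing)) + 1
    new_z = np.arange(new_nz) * pixel_spacing
    out = np.empty((new_nz,) + volume.shape[1:])
    for y in range(volume.shape[1]):
        for x in range(volume.shape[2]):
            out[:, y, x] = np.interp(new_z, old_z, volume[:, y, x])
    return out
```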
[0091] When the detection processing of FIG. 7 was performed on the three-dimensional MRA image data of the 20 cases using the classifiers constructed as described above, 1.85 false-positive candidates were included per case, but the detection rate for the cerebral aneurysms was 100%, confirming that they can be detected with high accuracy.
[0092] As described above, according to the first embodiment, cerebral aneurysm candidates, which have the characteristic that gradient vectors concentrate at their central portion, are detected with high accuracy using the vector concentration filter, and the detection information can be provided to the doctor. This prevents fatigue and oversights during the doctor's image interpretation work, and an improvement in diagnostic accuracy can be expected.
[0093] In addition, since the cerebral aneurysm candidate regions are displayed so as to be distinguishable from other regions, the doctor can easily grasp the detection results on the image, and the interpretation work can proceed smoothly.
[0094] Furthermore, since the vector concentration filter is applied not to the MRA image itself but to the extracted image of the blood vessel region, the processing time required for the filtering can be shortened.
[0095] The above embodiment is a preferred example to which the present invention is applied, and the invention is not limited to it. For example, although cerebral aneurysm candidates were detected above using three-dimensional MRA images, the detection may also be performed using two-dimensional MRA images. In that case, two-dimensional feature values are calculated, with the size of a cerebral aneurysm candidate given as a pixel count and the sphericity as a circularity.
[0096] Although an example of detecting aneurysm candidates using MRA images has been described, MRI images obtained by other imaging methods may be used, such as contrast-enhanced MRA images in which the blood vessel region is imaged using a contrast agent, and images in which the blood vessel region is captured by other imaging apparatuses, such as CTA (Computed Tomography Angiography) or DSA (Digital Subtraction Angiography), may also be used. The detection target is likewise not limited to aneurysms: any lesion having the characteristics of a spherical protrusion can be detected by applying the present invention.
[0097] Further, according to the above embodiment, the positions and names of the blood vessel sites contained in the blood vessel image of the target image are determined by aligning the target image, with respect to its blood vessel image, with a reference image in which the positions and names of eight blood vessel sites are defined in advance. Since this position and name information is attached to the target image as blood vessel site information, each blood vessel site can easily be identified from it when the target image is MIP-displayed, and identification display of each site becomes possible. The doctor can therefore observe the target image while paying attention to a specific blood vessel site, and interpretation efficiency can be improved.
[0098] When MIP display is performed, depending on the display direction, a plurality of blood vessel sites may overlap, making it difficult to observe the site of interest.
Nevertheless, according to the present embodiment each blood vessel site can be identification-displayed, so the doctor can easily specify the position and name of each site, and the interpretation work is made more efficient.
[0099] In addition, in response to a selection operation on any of the identification-displayed blood vessel sites, a target MIP image is created and displayed in which only the selected site is extracted, so the doctor can observe only the site of interest. Overlaps among a plurality of blood vessel sites can thus be eliminated, and observation can focus on a particular site, such as one where aneurysms frequently occur.
[0100] When aligning the blood vessel images of the target image and the reference image, a rough alignment is first performed based on the centroid positions, after which a rigid transformation guided by the cross-correlation coefficients is applied to the target image for a fine alignment that brings the feature points of its blood vessel image into coincidence with the corresponding feature points of the blood vessel image of the reference image.
[0101] Even for different subjects (patients), the forms of the major blood vessel sites (vessel length, running direction, thickness, and so on) are largely common, but the forms of the minor, thinner vessels vary between individuals, so the form of the blood vessel sites differs from subject to subject.
Nevertheless, by finally fitting the positions of feature points, such as the main bends of the blood vessel image, through the rigid transformation as in the present embodiment, each blood vessel site in the target image can be accurately matched to the corresponding site of the reference image regardless of its form. The blood vessel sites can therefore be discriminated uniformly irrespective of the subject, giving high versatility.
[0102] Furthermore, since the alignment is performed in two stages (one based on the centroid positions and one based on the rigid transformation), the accuracy of blood vessel site discrimination can be improved. Performing the centroid-based alignment before the rigid transformation also shortens the processing time required for the rigid transformation, giving good processing efficiency.
[0103] The embodiment described above is a preferred example to which the present invention is applied, and the invention is not limited to it.
For example, although an example using MRA images has been described, MRI images obtained by other imaging methods, such as contrast-enhanced MRA images in which the blood vessels are imaged using a contrast agent, may be used. Images in which the blood vessels are captured by other imaging apparatuses, such as CTA (Computed Tomography Angiography) or DSA (Digital Subtraction Angiography), may also be used.
[0104] <Second Embodiment>
In the second embodiment, an example of detection processing using a GC filter bank is described. The medical image processing apparatus according to the second embodiment has the same configuration as the medical image processing apparatus 10 according to the first embodiment and differs only in operation. The same reference numerals are therefore given to the same components as in the medical image processing apparatus 10 of the first embodiment (see FIG. 1), and the operation of the medical image processing apparatus 10 in the second embodiment is described below.
[0105] FIG. 19 is a flowchart showing the detection processing according to the second embodiment.
In the detection processing shown in FIG. 19, three-dimensional MRA image data are first input (step S101), and preprocessing is applied to these data (step S102). When the preprocessing is finished, the image region of the blood vessels is extracted from the three-dimensional image data (step S103). Since steps S101 to S103 are the same processing as steps S1 to S3 described with reference to FIG. 2 in the first embodiment, a detailed description is omitted here.
[0106] Next, primary candidate regions for cerebral aneurysms are detected by the GC filter bank using the extracted three-dimensional MRA image of the blood vessel region (step S104).
The detection method using the GC filter bank is described below.
The GC filter bank is a combination of various filtering processes and is divided into an analysis bank and a reconstruction bank. The analysis bank performs multiresolution analysis of the original image (the three-dimensional MRA image of the blood vessel region) to create images of different resolution levels (hereinafter, partial images), and creates a weight image from these partial images. The reconstruction bank, on the other hand, weights each partial image with the weight image and then reconstructs the original image from the weighted partial images.
[0107] FIG. 20 shows the analysis bank.
As shown in FIG. 20, the analysis bank takes the three-dimensional MRA image of the blood vessel region as the original image S0 and performs filtering in the filter banks A(z^j), sequentially creating partial images at each resolution level j. Here, an example of creating partial images at resolution levels 1 to 3 is described.
The filter bank A(z^j) decomposes the image S_{j-1} into the partial images S_j, Wz_j, Wy_j, and Wx_j through filtering with the filters H(z^j) and G(z^j), as shown in FIG. 21. Here, S_j is the result of applying the smoothing filter H(z^j) to S_{j-1} in each of the x, y, and z directions. The smoothing filter H(z^j) is expressed by Equation 7 below.

[Equation 4]

H(z) = \frac{1}{4}z^{2} + \frac{1}{2} + \frac{1}{4}z^{-2} \quad (7)

Note that z as used in Equation 7 denotes the z-transform variable (the same applies to Equations 8 to 10 below, which likewise describe filters).
[0108] The partial images Wz_j, Wy_j, and Wx_j are first-order difference images obtained by applying the filter G(z^j) to the image S_{j-1} in the x, y, and z directions, respectively. The filter G(z^j) is expressed by Equation 8 below.

[Equation 5]

G(z) = z - z^{-1} \quad (8)

The smaller the resolution level of the partial images Wz_j, Wy_j, and Wx_j (j = 1), the higher their sensitivity to fine variations; conversely, the larger the resolution level (j = 3), the higher their sensitivity to coarse variations. Partial images at a small resolution level are therefore suited to detecting small aneurysms, and partial images at a large resolution level are suited to detecting large aneurysms.
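Interpreted as discrete one-dimensional convolutions, the filters of Equations 7 and 8 can be sketched as follows. The tap values for H follow Equation 7 as reconstructed above and should be treated as an assumption; at level j the taps would additionally be dilated, as the A(z^j) notation suggests.

```python
import numpy as np

# Filter taps, highest power of z first (a tap at z^k reads the input k samples ahead).
H_TAPS = np.array([0.25, 0.0, 0.5, 0.0, 0.25])  # Equation 7: (1/4)z^2 + 1/2 + (1/4)z^-2
G_TAPS = np.array([1.0, 0.0, -1.0])             # Equation 8: z - z^-1

def apply_filter(signal, taps):
    """Centered 1-D filtering; applied separately along x, y, z for the 3-D case."""
    return np.convolve(signal, taps, mode="same")

ramp = np.arange(8, dtype=float)
diff = apply_filter(ramp, G_TAPS)
# Away from the borders, the centered difference x[n+1] - x[n-1] of a unit ramp is 2,
# illustrating that G acts as a first-order (gradient-like) difference filter.
```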
[0109] S_j is then further filtered by the filter bank A(z^{j+1}) at resolution level j + 1. By repeating the filtering in this way, the partial images S_j, Wz_j, Wy_j, and Wx_j are created for each resolution level j (j = 1 to 3).
[0110] Next, as shown in FIG. 20, the partial images Wz_j, Wy_j, and Wx_j at each resolution level are filtered with the vector concentration filter GC, and the vector concentration at each resolution level is calculated. Since the calculation method for the vector concentration was described above, its explanation is omitted here.
[0111] The calculated vector concentrations are input to a neural network NN.
The neural network NN is designed, using training data in advance, to produce an output value in the range 0 to 1: the higher the likelihood of a cerebral aneurysm, the closer the output to 1, and conversely, the lower the likelihood, the closer to 0.
[0112] When the output values are obtained from the neural network NN, the determination unit NM generates and outputs a weight image V based on them. The weight image V is created by setting the voxel value to 1 for voxels whose output value exceeds a certain threshold (here, 0.8) and to 0 for voxels whose output value is 0.8 or less. That is, a voxel whose output exceeds the threshold 0.8 is highly likely to be a voxel constituting a cerebral aneurysm region, whereas a voxel whose output is 0.8 or less is highly likely to constitute a normal blood vessel region. The weight image is thus created by binarizing the voxel values at the threshold. The weight image V is input to the reconstruction bank.
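The construction of the weight image V can be sketched as follows; the threshold 0.8 follows the text, and the array name is illustrative.

```python
import numpy as np

def make_weight_image(nn_output, threshold=0.8):
    """Binarize the per-voxel neural-network outputs: 1 where the output exceeds
    the threshold (likely aneurysm), 0 otherwise (likely normal vessel)."""
    return (np.asarray(nn_output) > threshold).astype(np.uint8)
```

Note that an output exactly equal to 0.8 maps to 0, matching the "0.8 or less" rule of the text.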
[0113] FIG. 22 shows the reconstruction bank.
The reconstruction bank applies weighting to the partial images S_j, Wz_j, Wy_j, and Wx_j and then reconstructs the original image S0 through the filter banks S(z^j).
In the weighting processing, each partial image S_j, Wz_j, Wy_j, Wx_j is multiplied by the weight image V. That is, the values of 1 or 0 set for each voxel in the weight image V are used as weighting coefficients. Each partial image multiplied by the weight image V is input to the filter bank S(z^j).
[0114] The filter bank S(z^j) reconstructs S_{j-1} from S_j, Wz_j, Wy_j, and Wx_j through filtering with the filters L(z^j), K(z^j), and H(z^j)L(z^j), as shown in FIG. 23. Here, the filters L(z^j) and K(z^j) are expressed by Equations 9 and 10 below.
[数 6]  [Equation 6]
Kfz =― (-Z3 - 5z + 5z_1 + z"3) · · · (9) Kfz = - (-Z 3 - 5z + 5z _1 + z "3) · · · (9)
16  16
L(z) = -z2 + - + -z"2 ■ ■ ■ (10) L (z) = -z 2 +-+ -z " 2 ■ ■ ■ (10)
4 2 4  4 2 4
[0115] That is, S_j is filtered by the filter L(z^j) in each of the x, y, and z directions. To Wz_j, the filter K(z^j) is applied in the x direction and the filter H(z^j)L(z^j) in the y and z directions. To Wy_j, the filter K(z^j) is applied in the y direction and the filter H(z^j)L(z^j) in the z direction, and to Wx_j, the filter K(z^j) is applied in the z direction. S_(j-1) is then obtained by adding the filtered S_j, Wz_j, Wy_j, and Wx_j.

[0116] S_(j-1) is in turn filtered by the filter bank S(z^(j-1)), reconstructing S_(j-2). By repeating such filtering, the original image S_0 can be reconstructed.
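Read as FIR taps, Equations 9 and 10 can be checked numerically. The sketch below assumes the usual z-transform convention (a z^k term contributes at lag -k) and zero-padded "same"-mode convolution; these are assumptions of the sketch, not the patent's exact implementation. It verifies two properties consistent with the text: L is a smoothing (even) filter that preserves constants, while K is a gradient-like (odd) filter that annihilates them.

```python
import numpy as np

# FIR taps read off Eqs. (9) and (10); a z^k term contributes at lag -k.
K = np.array([-1, 0, -5, 0, 5, 0, 1]) / 16.0  # Eq. (9): odd, gradient-like
L = np.array([1, 0, 2, 0, 1]) / 4.0           # Eq. (10): even, smoothing

def filt(signal, taps):
    """Apply a 1-D filter along the last axis (zero-padded 'same' convolution)."""
    return np.apply_along_axis(lambda s: np.convolve(s, taps, mode="same"), -1, signal)

sig = np.ones(8)
print(L.sum())          # DC gain of L is 1: smoothing preserves constants
print(K.sum())          # DC gain of K is 0: gradient filter kills constants
print(filt(sig, K)[3])  # fully-overlapped sample of a constant signal -> 0.0
```

In a separable 3-D implementation, `filt` would be applied along each axis in turn, with L, K, or H·L chosen per axis as paragraph [0115] describes.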
[0117] The reconstruction of the original image S_0 reproduces the original image perfectly only when the filters H(z^j), G(z^j), L(z^j), and K(z^j) in the filter banks A(z^j) and S(z^j) are the filter combinations shown in FIGS. 21 and 23. Consequently, as a result of the reconstruction, the original image S_0 is perfectly reproduced for voxels whose value is set to 1 in the weighted image V, whereas for voxels whose value is set to 0 in the weighted image V the original image S_0 is not reproduced and their voxel values become 0. The output image S_0 produced by the GC filter bank is therefore an image in which only the image regions that are likely to be cerebral aneurysms are reproduced. The images appearing in this output image S_0 are the primary candidates for cerebral aneurysms.
[0118] Once the output image S_0 is obtained, feature quantities are calculated using it (step S105). As in the first embodiment, the calculated feature quantities are the size of each candidate region, its sphericity, and the average of the vector concentration degrees of the voxels within the region. After secondary detection is performed using the calculated feature quantities (step S106), tertiary detection is further performed (step S107), and the result is output as the final detection result (step S108). The processing of steps S105 to S108 is identical to steps S5 to S8 described with reference to FIG. 2, so a detailed description is omitted.
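The three feature quantities named in this paragraph can be sketched for a single candidate region as follows. The patent does not fix the exact sphericity formula here, so the sketch uses one plausible proxy (region volume divided by the volume of the ball whose radius is the farthest voxel-to-centroid distance); the function name and toy data are illustrative.

```python
import numpy as np

def region_features(mask, concentration, voxel_volume=1.0):
    """Size, a sphericity proxy, and mean vector concentration of one
    candidate region, given as a boolean mask over the volume.

    Sphericity here: region volume over the volume of the enclosing ball
    centered at the centroid. This is an assumed proxy, not the patent's
    exact definition.
    """
    idx = np.argwhere(mask)
    size = idx.shape[0] * voxel_volume
    centroid = idx.mean(axis=0)
    r = np.linalg.norm(idx - centroid, axis=1).max() + 0.5  # half-voxel margin
    sphericity = size / (4.0 / 3.0 * np.pi * r ** 3)
    mean_conc = concentration[mask].mean()
    return size, sphericity, mean_conc

# 3x3x3 cube region in a 5x5x5 volume, uniform concentration 0.9.
mask = np.zeros((5, 5, 5), dtype=bool)
mask[1:4, 1:4, 1:4] = True
conc = np.full((5, 5, 5), 0.9)
size, sph, mc = region_features(mask, conc)
print(size)  # 27.0 voxels
```

In practice each connected component of the output image S_0 would be labeled (e.g. with `scipy.ndimage.label`) and these features computed per component before the secondary detection step.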
[0119] The blood vessel part discrimination processing is also performed in the second embodiment, but since its content is the same as in the first embodiment, a description of the processing and its effects is omitted.
[0120] As described above, according to the second embodiment, the vector concentration degree is calculated for each resolution level from the partial images obtained by performing multiresolution analysis on the head image with the GC filter bank. Calculating the vector concentration degree for each resolution level makes it possible to detect aneurysms of various sizes, enabling more accurate detection processing.
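The per-voxel vector concentration degree summarized above measures how strongly the gradient vectors around a point converge on it. The following simplified 2-D sketch makes the idea concrete: it averages the cosine between each neighbor's gradient vector and the direction from that neighbor toward the point. The neighborhood shape, the uniform weighting, and the function name are assumptions of this sketch, not the patent's exact formulation.

```python
import numpy as np

def vector_concentration(image, p, radius=3):
    """Average cosine between each neighbor's gradient vector and the
    unit vector from that neighbor toward p.

    Values near 1 mean the surrounding gradients converge on p, which is
    characteristic of a blob-like (aneurysm-like) bright structure.
    """
    gy, gx = np.gradient(image.astype(float))
    py, px = p
    total, count = 0.0, 0
    for y in range(py - radius, py + radius + 1):
        for x in range(px - radius, px + radius + 1):
            if (y, x) == (py, px):
                continue
            g = np.array([gy[y, x], gx[y, x]])
            gn = np.linalg.norm(g)
            if gn < 1e-12:
                continue  # flat region: no direction information
            d = np.array([py - y, px - x], dtype=float)
            total += np.dot(g / gn, d / np.linalg.norm(d))
            count += 1
    return total / count if count else 0.0

# Bright Gaussian blob: intensity falls off from the peak, so gradients
# point back toward the peak and the concentration there is close to 1.
yy, xx = np.mgrid[0:21, 0:21]
blob = np.exp(-((yy - 10.0) ** 2 + (xx - 10.0) ** 2) / 18.0)
c = vector_concentration(blob, (10, 10))
print(c > 0.9)  # True
```

Computing this on each partial image S_j applies the same measure at a different effective scale, which is what lets differently sized aneurysms be caught at different resolution levels.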
[0121] In addition, a weighted image is generated in which the voxel value is set to 1 only for voxels whose vector concentration degree reaches a predetermined value at or above the threshold, so that the original image is reproduced there, and to 0 for all other voxels; each partial image is weighted by multiplication with this weighted image, and the original image is reconstructed from the weighted partial images. Consequently, only regions with a high vector concentration degree, that is, regions highly likely to be aneurysms, are reconstructed, yielding a reconstructed image in which only the aneurysm candidate regions appear. Calculating feature quantities from such a reconstructed image excludes image elements other than the candidate regions from the calculation, so the feature quantities of the candidate regions can be calculated accurately. The accuracy of the detection processing itself can therefore be further improved.
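The weighting step summarized above, multiplying every subband by the binary weight image V before reconstruction, can be sketched as follows. The dictionary layout and names are illustrative assumptions; only the element-wise multiplication is the point.

```python
import numpy as np

def mask_subbands(subbands, V):
    """Multiply every partial (subband) image by the binary weight image V
    so that only candidate-region voxels contribute to reconstruction."""
    return {name: img * V for name, img in subbands.items()}

V = np.array([[1, 0],
              [0, 1]])
subbands = {"S": np.full((2, 2), 4.0),
            "Wx": np.full((2, 2), 2.0)}
masked = mask_subbands(subbands, V)
print(masked["S"])  # diagonal entries keep 4.0, masked entries become 0.0
```

Because the multiplication happens in the subband domain, voxels weighted 0 contribute nothing to the synthesis filters, which is why the reconstructed image is zero outside the candidate regions.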
Industrial Applicability

[0122] The present invention can be used in the field of image processing, and can be applied to a medical image processing apparatus that performs image analysis and image processing on head images obtained by a medical imaging apparatus.

Claims

[1] A medical image processing apparatus comprising:
analysis means for performing multiresolution analysis on a head image and calculating, using the partial images decomposed for each resolution level, a vector concentration degree for each resolution level;
reconstruction means for, in reconstructing the original head image from the partial images, reproducing the original image only in candidate lesion regions where the calculated vector concentration degree reaches a predetermined value; and
deletion means for, using the reconstructed head image, deleting false-positive candidate regions that are normal blood vessels from the candidate lesion regions reproduced in the head image.

[2] The medical image processing apparatus according to claim 1, wherein the reconstruction means weights the partial images so that only images of regions where the vector concentration degree reaches the predetermined value are reproduced, and reconstructs the original head image using the weighted partial images.

[3] The medical image processing apparatus according to claim 1 or 2, wherein the deletion means calculates feature quantities for the candidate lesion regions reproduced in the reconstructed head image, and deletes false-positive candidate regions from the candidate regions on the basis of the feature quantities.

[4] The medical image processing apparatus according to claim 3, wherein the feature quantities include at least one of the size of a candidate lesion region reproduced in the reconstructed head image, its sphericity, and the average or the maximum of the vector concentration degree within the candidate region.

[5] The medical image processing apparatus according to any one of claims 1 to 4, wherein the lesion is an aneurysm arising in a blood vessel,
the apparatus further comprises extraction means for extracting a blood vessel image from the head image, and
the image constructing means performs the multiresolution analysis and the reconstruction of the original image using the image from which the blood vessel image has been extracted.

[6] The medical image processing apparatus according to any one of claims 1 to 5, wherein the head image is an MRI image captured by MRI.

[7] The medical image processing apparatus according to any one of claims 1 to 6, wherein the MRI image is an MRA image captured by an MRA imaging method in MRI.

[8] The medical image processing apparatus according to claim 1, further comprising:
extraction means for extracting a blood vessel image from the head image; and
image control means for identifying one or more blood vessel parts included in the extracted blood vessel image and attaching blood vessel part information concerning the identified blood vessel parts to the head image.

[9] The medical image processing apparatus according to claim 8, wherein the image control means determines the positions and names of the blood vessel parts in the head image using a reference image in which the positions and names of one or more blood vessel parts included in a blood vessel image are predetermined, and attaches information on the determined position and name of each blood vessel part to the head image as the blood vessel part information.

[10] The medical image processing apparatus according to claim 9, wherein the image control means applies an affine transformation to the head image so that the position of the blood vessel image in the affine-transformed head image substantially coincides with the position of the blood vessel image in the reference image, and determines that a blood vessel image portion of the head image corresponding to a blood vessel part predetermined in the substantially matched reference image is that predetermined blood vessel part.

[11] The medical image processing apparatus according to claim 9 or 10, further comprising:
display means for displaying the head image; and
display control means for identifying, on the basis of the blood vessel part information attached to the head image, one or more blood vessel parts included in the blood vessel image of the displayed head image, and causing each of those blood vessel parts to be distinguishably displayed in the displayed head image.

[12] The medical image processing apparatus according to claim 11, wherein the display control means determines, on the basis of the blood vessel part information, the name of each distinguishably displayed blood vessel part, and causes the determined name to be displayed in association with the corresponding blood vessel part.

[13] The medical image processing apparatus according to claim 12, further comprising operation means for performing display operations, wherein, when a blood vessel part to be displayed is selected in the head image via the operation means, the display control means extracts only the blood vessel image corresponding to the selected blood vessel part from the head image and causes only the extracted blood vessel image to be displayed on the display means.

[14] An image processing method comprising:
an analysis step of performing multiresolution analysis on a head image and calculating, using the partial images decomposed for each resolution level, a vector concentration degree for each resolution level;
a reconstruction step of, in reconstructing the original head image from the partial images, reproducing the original image only in candidate lesion regions where the calculated vector concentration degree reaches a predetermined value; and
a deletion step of, using the reconstructed head image, deleting false-positive candidate regions that are normal blood vessels from the candidate lesion regions reproduced in the head image.

[15] The image processing method according to claim 14, wherein, in the reconstruction step, the partial images are weighted so that only images of candidate lesion regions where the vector concentration degree reaches the predetermined value are reproduced, and the original head image is reconstructed using the weighted partial images.

[16] The image processing method according to claim 14 or 15, wherein, in the deletion step, feature quantities are calculated for the candidate lesion regions reproduced in the reconstructed head image, and false-positive candidate regions are deleted from the candidate regions on the basis of the feature quantities.

[17] The image processing method according to claim 16, wherein the feature quantities include at least one of the size of a candidate lesion region reproduced in the reconstructed head image, its sphericity, and the average or the maximum of the vector concentration degree within the candidate region.

[18] The image processing method according to any one of claims 14 to 17, wherein the lesion is an aneurysm arising in a blood vessel,
the method further comprises an extraction step of extracting a blood vessel image from the head image, and
in the reconstruction step, the multiresolution analysis and the reconstruction of the original image are performed using the image from which the blood vessel image has been extracted.

[19] The image processing method according to any one of claims 14 to 18, wherein the head image is an MRI image captured by MRI.

[20] The image processing method according to any one of claims 14 to 19, wherein the MRI image is an MRA image captured by an MRA imaging method in MRI.

[21] The image processing method according to claim 14, further comprising:
an extraction step of extracting a blood vessel image from the head image; and
an image control step of identifying one or more blood vessel parts included in the extracted blood vessel image and attaching blood vessel part information concerning the identified blood vessel parts to the head image.

[22] The image processing method according to claim 21, wherein, in the image control step, the positions and names of the blood vessel parts in the head image are determined using a reference image in which the positions and names of one or more blood vessel parts included in a blood vessel image are predetermined, and information on the determined position and name of each blood vessel part is attached to the head image as the blood vessel part information.

[23] The image processing method according to claim 22, wherein, in the image control step, an affine transformation is applied to the head image so that the position of the blood vessel image in the affine-transformed head image substantially coincides with the position of the blood vessel image in the reference image, and a blood vessel image portion of the head image corresponding to a blood vessel part predetermined in the substantially matched reference image is determined to be that predetermined blood vessel part.

[24] The image processing method according to claim 22 or 23, further comprising:
a display step of displaying the head image on display means; and
a display control step of identifying, on the basis of the blood vessel part information attached to the head image, one or more blood vessel parts included in the blood vessel image of the displayed head image, and causing each of those blood vessel parts to be distinguishably displayed in the displayed head image.

[25] The image processing method according to claim 24, wherein, in the display control step, the name of each distinguishably displayed blood vessel part is determined on the basis of the blood vessel part information, and the determined name is displayed in association with the corresponding blood vessel part.

[26] The image processing method according to claim 25, wherein, in the display control step, when a blood vessel part to be displayed is selected in the head image via operation means, only the blood vessel image corresponding to the selected blood vessel part is extracted from the head image, and only the extracted blood vessel image is displayed on the display means.
PCT/JP2006/316595 2005-08-31 2006-08-24 Medical image processor and image processing method WO2007026598A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2007533204A JP4139869B2 (en) 2005-08-31 2006-08-24 Medical image processing device

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2005250915 2005-08-31
JP2005-250915 2005-08-31
JP2006-083950 2006-03-24
JP2006083950 2006-03-24

Publications (1)

Publication Number Publication Date
WO2007026598A1

Family

ID=37808692

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2006/316595 WO2007026598A1 (en) 2005-08-31 2006-08-24 Medical image processor and image processing method

Country Status (2)

Country Link
JP (1) JP4139869B2 (en)
WO (1) WO2007026598A1 (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101373563B1 (en) * 2012-07-25 2014-03-12 전북대학교산학협력단 Method of derivation for hemodynamics and MR-signal intensity gradient(or shear rate) using Time-Of-Flight - Magnetic Resonance Angiography

Citations (5)

Publication number Priority date Publication date Assignee Title
JPS6214590A (en) * 1985-07-12 1987-01-23 Toshiba Corp Image diagnosing device
JPH1094538A (en) * 1996-09-25 1998-04-14 Fuji Photo Film Co Ltd Method and apparatus for detecting abnormal shade candidate
JP2002109510A (en) * 2000-09-27 2002-04-12 Fuji Photo Film Co Ltd Possible abnormal shadow detecting and processing system
JP2002515772A (en) * 1995-11-10 2002-05-28 ベス・イスラエル・デイーコネス・メデイカル・センター Imaging device and method for canceling movement of a subject
JP2002203248A (en) * 2000-11-06 2002-07-19 Fuji Photo Film Co Ltd Measurement processor for geometrically measuring image


Non-Patent Citations (3)

Title
ARIMURA H. ET AL.: "Tobu MRA ni Okeru Nodomyakuryu Kenshutsu no CAD System", INNERVISION, vol. 19, no. 10, 25 September 2004 (2004-09-25), pages 22 - 25, XP003009849 *
MASUMOTO T.: "MR Angiography o Riyo shita No Domyakuryu no Computer Shien Gazo Shindan (CAD) no Kenkyu", INNERVISION, vol. 20, no. 8, 25 June 2005 (2005-06-25), pages 36, XP003009851 *
NAKAYAMA R. ET AL.: "Iyo Gazo ni Okeru Enkei.Senjo Pattern Kenshutsu no Tame no Filter Bank no Kochiku", THE TRANSACTIONS OF THE INSTITUTE OF ELECTROS, vol. J-87-D-II, no. 1, 1 January 2004 (2004-01-01), pages 176 - 185, XP003009850 *

Cited By (28)

Publication number Priority date Publication date Assignee Title
US9439608B2 (en) 2007-04-20 2016-09-13 Medicim Nv Method for deriving shape information
JP2014237005A (en) * 2007-04-20 2014-12-18 メディシム・ナムローゼ・フエンノートシャップ Method for deriving shape information
JP2014000483A (en) * 2007-07-24 2014-01-09 Toshiba Corp Computerized transverse axial tomography and image processor
JP2009039446A (en) * 2007-08-10 2009-02-26 Fujifilm Corp Image processing apparatus, image processing method, and image processing program
JP2009106443A (en) * 2007-10-29 2009-05-21 Toshiba Corp Medical imageing device, medical image processor, and medical image processing program
CN102984990A (en) * 2011-02-01 2013-03-20 奥林巴斯医疗株式会社 Diagnosis assistance apparatus
JP2013085652A (en) * 2011-10-17 2013-05-13 Toshiba Corp Medical image processing system
US9192347B2 (en) 2011-10-17 2015-11-24 Kabushiki Kaisha Toshiba Medical image processing system applying different filtering to collateral circulation and ischemic blood vessels
CN103327899A (en) * 2011-10-17 2013-09-25 株式会社东芝 Medical image processing system
CN103327899B (en) * 2011-10-17 2016-04-06 株式会社东芝 Medical image processing system
WO2013058114A1 (en) * 2011-10-17 2013-04-25 株式会社東芝 Medical image processing system
JP2014124269A (en) * 2012-12-25 2014-07-07 Toshiba Corp Ultrasonic diagnostic device
JP2015084936A (en) * 2013-10-30 2015-05-07 ジーイー・メディカル・システムズ・グローバル・テクノロジー・カンパニー・エルエルシー Magnetic resonance apparatus and program
JP2019500146A (en) * 2015-12-30 2019-01-10 コーニンクレッカ フィリップス エヌ ヴェKoninklijke Philips N.V. 3D body model
CN108496205B (en) * 2015-12-30 2023-08-15 皇家飞利浦有限公司 Three-dimensional model of body part
CN108496205A (en) * 2015-12-30 2018-09-04 皇家飞利浦有限公司 The threedimensional model of body part
US11200750B2 (en) 2015-12-30 2021-12-14 Koninklijke Philips N.V. Three dimensional model of a body part
WO2018043878A1 (en) * 2016-08-30 2018-03-08 삼성전자주식회사 Magnetic resonance imaging device
KR101821353B1 (en) * 2016-08-30 2018-01-23 삼성전자주식회사 Magnetic resonance imaging apparatus
JP2018201569A (en) * 2017-05-30 2018-12-27 国立大学法人九州大学 Map information generation method, determination method, and program
US10467750B2 (en) 2017-07-21 2019-11-05 Panasonic Intellectual Property Management Co., Ltd. Display control apparatus, display control method, and recording medium
JP2020014712A (en) * 2018-07-26 2020-01-30 株式会社日立製作所 Medical image processing device and medical image processing method
JP2020171480A (en) * 2019-04-10 2020-10-22 キヤノンメディカルシステムズ株式会社 Medical image processing device and medical image processing system
CN111820898A (en) * 2019-04-10 2020-10-27 佳能医疗系统株式会社 Medical image processing apparatus and medical image processing system
JP7271277B2 (en) 2019-04-10 2023-05-11 キヤノンメディカルシステムズ株式会社 Medical image processing device and medical image processing system
CN111820898B (en) * 2019-04-10 2024-04-05 佳能医疗系统株式会社 Medical image processing device and medical image processing system
WO2021075026A1 (en) * 2019-10-17 2021-04-22 株式会社ニコン Image processing method, image processing device, and image processing program
CN112842264A (en) * 2020-12-31 2021-05-28 哈尔滨工业大学(威海) Digital filtering method and device in multi-modal imaging and multi-modal imaging technical system

Also Published As

Publication number Publication date
JPWO2007026598A1 (en) 2009-03-26
JP4139869B2 (en) 2008-08-27

Similar Documents

Publication Publication Date Title
JP4139869B2 (en) Medical image processing device
JP4823204B2 (en) Medical image processing device
US10593035B2 (en) Image-based automated measurement model to predict pelvic organ prolapse
EP3367331A1 (en) Deep convolutional encoder-decoder for prostate cancer detection and classification
CN110036408B (en) Automatic ct detection and visualization of active bleeding and blood extravasation
US7058210B2 (en) Method and system for lung disease detection
US7283652B2 (en) Method and system for measuring disease relevant tissue changes
EP1728213B1 (en) Method and apparatus for identifying pathology in brain images
EP2116973B1 (en) Method for interactively determining a bounding surface for segmenting a lesion in a medical image
US20060062447A1 (en) Method for simple geometric visualization of tubular anatomical structures
EP2846310A2 (en) Method and apparatus for registering medical images
US20030208116A1 (en) Computer aided treatment planning and visualization with image registration and fusion
US20070237372A1 (en) Cross-time and cross-modality inspection for medical image diagnosis
US7684602B2 (en) Method and system for local visualization for tubular structures
US20080171932A1 (en) Method and System for Lymph Node Detection Using Multiple MR Sequences
JP2008515466A (en) Method and system for identifying an image representation of a class of objects
JP2008073338A (en) Medical image processor, medical image processing method and program
US7747051B2 (en) Distance transform based vessel detection for nodule segmentation and analysis
JP2009226043A (en) Medical image processor and method for detecting abnormal shadow
JP2006520233A (en) 3D imaging apparatus and method for signaling object of interest in volume data
US7961923B2 (en) Method for detection and visional enhancement of blood vessels and pulmonary emboli
CN1836258B (en) Method and system for using structure tensors to detect lung nodules and colon polyps
CN111862014A (en) ALVI automatic measurement method and device based on left and right ventricle segmentation

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 2007533204

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 06782999

Country of ref document: EP

Kind code of ref document: A1