US20050017972A1 - Displaying image data using automatic presets - Google Patents

Displaying image data using automatic presets

Info

Publication number
US20050017972A1
US20050017972A1 (application US 10/922,700)
Authority
US
United States
Prior art keywords
voxels
data set
interest
tissue type
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US10/922,700
Inventor
Ian Poole
Andrew Bissell
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Voxar Ltd
Original Assignee
Voxar Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Voxar Ltd filed Critical Voxar Ltd
Priority to US10/922,700 priority Critical patent/US20050017972A1/en
Assigned to VOXAR LIMITED: assignment of assignors' interest (see document for details). Assignors: BISSELL, ANDREW JOHN; POOLE, IAN
Publication of US20050017972A1 publication Critical patent/US20050017972A1/en
License to BARCOVIEW MIS EDINBURGH, A UK BRANCH OF BARCO NV (see document for details). Assignor: VOXAR LIMITED
Current legal status: Abandoned

Classifications

    • G: PHYSICS
        • G06: COMPUTING; CALCULATING OR COUNTING
            • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T 19/00: Manipulating 3D models or images for computer graphics
                    • G06T 19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
                • G06T 7/00: Image analysis
                    • G06T 7/10: Segmentation; Edge detection
                        • G06T 7/12: Edge-based segmentation
                • G06T 2200/00: Indexing scheme for image data processing or generation, in general
                    • G06T 2200/04: involving 3D image data
                • G06T 2207/00: Indexing scheme for image analysis or image enhancement
                    • G06T 2207/10: Image acquisition modality
                        • G06T 2207/10072: Tomographic images
                            • G06T 2207/10081: Computed x-ray tomography [CT]
                    • G06T 2207/30: Subject of image; Context of image processing
                        • G06T 2207/30004: Biomedical image processing
                            • G06T 2207/30101: Blood vessel; Artery; Vein; Vascular
                • G06T 2210/00: Indexing scheme for image generation or computer graphics
                    • G06T 2210/41: Medical
                • G06T 2219/00: Indexing scheme for manipulating 3D models or images for computer graphics
                    • G06T 2219/20: Indexing scheme for editing of 3D models
                        • G06T 2219/2012: Colour editing, changing, or manipulating; Use of colour codes
    • A: HUMAN NECESSITIES
        • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
            • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
                • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
                    • A61B 5/74: Details of notification to user or communication with user or patient; user input means
                        • A61B 5/742: Details of notification to user or communication with user or patient; user input means using visual displays
                            • A61B 5/7445: Display arrangements, e.g. multiple display units
                • A61B 6/00: Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
                    • A61B 6/46: Apparatus for radiation diagnosis with special arrangements for interfacing with the operator or the patient
                        • A61B 6/461: Displaying means of special interest
                            • A61B 6/466: Displaying means of special interest adapted to display 3D data
                • A61B 8/00: Diagnosis using ultrasonic, sonic or infrasonic waves
                    • A61B 8/46: Ultrasonic, sonic or infrasonic diagnostic devices with special arrangements for interfacing with the operator or the patient
                        • A61B 8/461: Displaying means of special interest

Definitions

  • the invention relates to the setting of visualization parameter boundaries, such as color and opacity boundaries, for displaying images, in particular two-dimensional (2D) projections from three-dimensional (3D) data sets.
  • the 2D data set is more amenable to user interpretation if different colors and opacities are allocated to different signal values in the 3D data set.
  • the details of the mapping of signal values to colors and opacities are stored in a look-up table which is often referred to as the RGBA color table (R, G, B and A referring to red, green, blue and alpha (for opacity) respectively).
  • the color table can be defined such that an entire color and opacity range is uniformly distributed between the minimum and maximum signal values in the voxel data set, as in a gray scale.
  • the color table can be defined by attributing different discrete colors and opacities to different signal value ranges. In more sophisticated approaches, different sub-ranges are ascribed different colors (e.g. red) and the shade of the color is smoothly varied across each sub-range (e.g. crimson to scarlet).
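As a rough illustration of the RGBA color table idea described above, the following Python sketch builds a uniform gray-scale table; the function name and sizing are illustrative, not taken from the patent.

```python
import numpy as np

def gray_scale_table(min_val, max_val):
    """Illustrative RGBA color table: one (R, G, B, A) row per signal value,
    with color and opacity distributed uniformly over the signal range."""
    n = max_val - min_val + 1
    t = np.linspace(0.0, 1.0, n)
    lut = np.empty((n, 4))
    lut[:, 0] = lut[:, 1] = lut[:, 2] = t   # R = G = B gives a gray ramp
    lut[:, 3] = t                           # alpha (opacity) rises with signal value
    return lut

# Rendering then maps each voxel through the table:
# rgba = lut[voxel_value - min_val]
```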
  • the signal values comprising the data set do not usually correspond to what would normally be regarded as visual properties, such as color or intensity, but instead correspond to detected signal values from the measuring system used, such as computer-assisted tomography (CT) scanners, magnetic resonance (MR) scanners, ultrasound scanners and positron-emission-tomography (PET) systems.
  • signal values from CT scanning will represent tissue opacity, i.e. X-ray attenuation.
  • it is known to map different colors and opacities to different ranges of display value such that particular features, e.g. bone (which will generally have a relatively high opacity) can be more clearly distinguished from soft tissue (which will generally have a relatively low opacity).
  • voxels within the 3D data set may also be selected for removal from the projected 2D image to reveal other more interesting features.
  • the choice of which voxels are to be removed, or sculpted, from the projected image can also be based on the signal value associated with particular voxels. For example, those voxels having signal values which correspond to soft tissue can be sculpted, i.e. not rendered and therefore “invisible”, thereby revealing those voxels having signal values corresponding to bone which would otherwise be visually obscured by the soft tissue.
  • the determination of the most appropriate color table (known in the art as a preset) to apply to an image derived from a particular 3D data set is not trivial and is dependent on many features of the 3D data set.
  • the details of a suitable color table will depend on the subject, what type of data is being represented, whether (and if so, how) the data are calibrated and what particular features of the 3D data set the user might wish to highlight, which will depend on the clinical application. It can therefore be a difficult and laborious task to produce a displayed image that is clinically useful.
  • there is inevitably an element of user-subjectivity in manually defining a color table and this can create difficulties in comparing and interpreting images created by different users, or even supposedly similar images created by a single user.
  • the user will generally base the choice of color table on a specific 2D projection of the 3D data set rather than on characteristics of the overall 3D data set.
  • a color table chosen for application to one particular projected image will not necessarily be appropriate to another projection of the same 3D data set.
  • a color table which is objectively based on characteristics of the 3D data set rather than a single projection would be preferred.
  • a method of setting visualization parameter boundaries for displaying an image from a 3D data set comprising a plurality of voxels, each with an associated signal value comprising: selecting a volume of interest (VOI) within the 3D data set; generating a histogram of signal values from voxels that are within the VOI; applying a numerical analysis method to the histogram to determine a visualization threshold; and setting at least one of a plurality of boundaries for a visualization parameter according to the visualization threshold.
  • a numerical analysis method can be applied to the histogram which is sensitive to subtle variations in signal value and can reliably identify significant boundaries within the 3D data set for visualization. This allows the visualization parameter boundaries to be set automatically, which is especially useful for 3D data sets for which the signal values have no calibration, as is the case for MR scans.
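A minimal sketch of this pipeline, assuming a NumPy volume and a boolean VOI mask; all names are hypothetical, and find_threshold (a sketch of the convex-hull analysis) appears further below.

```python
import numpy as np

def auto_preset(volume, voi_mask, margin=4):
    """Histogram the VOI, find one visualization threshold, and set a pair of
    visualization parameter boundaries either side of it."""
    values = volume[voi_mask]                      # signal values inside the VOI
    lo, hi = np.percentile(values, [0.1, 99.9])    # exclude extreme signal values
    counts, edges = np.histogram(values, bins=256, range=(lo, hi))
    centers = 0.5 * (edges[:-1] + edges[1:])
    threshold, _ = find_threshold(counts, centers) # convex-hull analysis (see below)
    return threshold - margin, threshold + margin
```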
  • first visualization parameter boundary is set at the visualization threshold.
  • first and second visualization parameter boundaries are set either side of the visualization threshold. This latter approach can be advantageous if an opacity curve interpolation algorithm is used to calculate an opacity curve between the visualization parameter boundaries.
  • the numerical analysis method may be applied once to determine only one visualization threshold. Remaining visualization parameter boundaries can then be set manually. Alternatively, the numerical analysis method can be applied iteratively to the histogram to determine a plurality of visualization thresholds and corresponding visualization parameter boundaries.
  • a significance test may be applied to visualization thresholds and, according to the outcome of the significance test, a significance marker can be ascribed for those ones of the voxels having signal values at or adjacent the visualization threshold, wherein the significance marker indicates significance or insignificance of the visualization threshold.
  • if two visualization parameter boundaries are set, one on each side of the visualization threshold, and the visualization threshold is determined to be significant, then it is convenient to mark as significant only the voxels having signal values at one of the two visualization parameter boundaries.
  • if a visualization threshold is calculated by the numerical analysis method to lie at a signal value of 54, and visualization parameter boundaries are set at 54 ± 3, i.e. at 51 and 57, then the voxels with signal values of 57 can be marked as significant, and the voxels with signal values of 51 as insignificant.
  • the significance test can be used to distinguish between visualization parameter boundaries used as enhancements to visualizations of a single tissue type (known as cosmetic boundaries) and those used to identify different tissue-types for the purpose of segmentation (known as significant boundaries). Accordingly, the method may further comprise applying a selection tool to the 3D data set, wherein the selection tool is sensitive to the significance markers. One or more of the selection tools can be designed to ignore voxels that have been marked as insignificant.
  • the rate of change of a visualization parameter across a visualization parameter boundary may also be modified based on the significance of the visualization parameter boundary.
  • a sharpness parameter can be calculated for determining what rate of change of the visualization parameter to apply at a boundary.
  • the sharpness parameter is the same as the significance marker.
  • the sharpness need not simply be a binary operand, but can adopt a range of integer values, for example from 0 to 100.
  • a sharpness of zero indicates an insignificant boundary, which is referred to as a cosmetic boundary in view of its irrelevance to selection tools.
  • a sharpness of 100 indicates a boundary that has the maximum degree of significance. Intermediate values are used to indicate intermediate significance.
  • the non-zero values may be used for filtering by the selection tools so that boundaries with a significance value of, for example, 5 are significant to some but not all selection tools, a boundary with a significance value of 50 is significant for a greater subset of the selection tools, and a boundary with the maximum significance value of 100 is significant to all selection tools.
  • the non-zero significance values may be used by selection tools to resolve conflicts between different marked boundaries, with boundaries having higher significance values taking precedence. Examples of selection tools are tools for marking objects in a set of connected or unconnected voxels with a visualization parameter.
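The significance-filtering behavior described above might look like the following sketch; the Boundary type and the per-tool threshold convention are assumptions for illustration, not the patent's API.

```python
from dataclasses import dataclass

@dataclass
class Boundary:
    signal_value: float
    significance: int   # 0 = cosmetic, 100 = maximally significant

def boundaries_seen_by(tool_min_significance, boundaries):
    """A selection tool honors only boundaries whose significance value meets
    that tool's own minimum; cosmetic (0) boundaries are ignored by all tools."""
    return [b for b in boundaries if b.significance >= tool_min_significance]
```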
  • the numerical analysis method comprises: forming a convex hull of a plurality of segments around the histogram; determining which perpendicular from the segments to the histogram has the greatest length; and taking the signal value at the intersection between the histogram and the perpendicular as the visualization threshold.
  • the sharpness value and the significance test can then be based on the length of the perpendicular determined to have the greatest length.
  • the visualization threshold can be determined to be insignificant if the ratio of the length of the perpendicular to a parameter derived from the signal value range and/or the frequency range of the histogram is below a minimum score.
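A sketch of this numerical analysis, under the assumption that the histogram axes have been normalized to comparable scales. The hull is built with a monotone-chain pass, and the returned perpendicular length can feed the sharpness and significance tests described above.

```python
import numpy as np

def upper_convex_hull(x, y):
    """Indices of the hull vertices lying above the curve; the points are
    assumed sorted by x (monotone-chain construction)."""
    hull = []
    for i in range(len(x)):
        while len(hull) >= 2:
            j, k = hull[-2], hull[-1]
            # Drop vertex k if it lies on or below the chord from j to i.
            if (x[k] - x[j]) * (y[i] - y[j]) - (y[k] - y[j]) * (x[i] - x[j]) >= 0:
                hull.pop()
            else:
                break
        hull.append(i)
    return hull

def find_threshold(counts, centers):
    """Return (signal value, perpendicular length) where the distance from the
    histogram up to its convex hull is greatest. Later iterations would
    additionally force the hull through previously found thresholds."""
    x, y = centers.astype(float), counts.astype(float)
    hull = upper_convex_hull(x, y)
    best_d, best_i, seg = 0.0, 0, 0
    for i in range(len(x)):
        while seg + 2 < len(hull) and x[hull[seg + 1]] <= x[i]:
            seg += 1                                  # hull segment over bin i
        xa, ya = x[hull[seg]], y[hull[seg]]
        xb, yb = x[hull[seg + 1]], y[hull[seg + 1]]
        # Perpendicular distance from (x[i], y[i]) to the hull chord.
        d = abs((xb - xa) * (ya - y[i]) - (yb - ya) * (xa - x[i])) / np.hypot(xb - xa, yb - ya)
        if d > best_d:
            best_d, best_i = d, i
    return x[best_i], best_d
```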
  • the numerical analysis method is applied to the histogram within a predetermined restricted range of signal values to search for a visualization threshold within that restricted range.
  • a restricted range may be defined in terms of Hounsfield units.
  • the histogram and its visualization parameter boundaries can be displayed to the user together with the image created from the 3D data set, thus making the user aware of the visualization parameter boundaries determined by the automatic preset.
  • the method of the invention is particularly powerful in that it can take account of sculpting performed on the 3D data set prior to automatic preset determination according to the invention.
  • a common example of sculpting will be when a plane is defined through a 3D data set and all voxels to one side of the plane are not rendered, irrespective of their signal values.
  • Another example of sculpting will be the removal of a given set of connected voxels with signal values in a specified range, thus restricting the range of signal values to be visualized prior to determining an automatic preset.
  • Sculpting can be taken account of by restricting the histogram to unsculpted voxels in the VOI.
  • voxels with the highest and lowest signal values often constitute bad data which can skew the results of the numerical analysis of the histogram. Accordingly, it is preferred that voxels with the highest and/or the lowest signal values are excluded from the numerical analysis method. For example, the voxels with the lowest and highest 0.1% of the signal values can be excluded. Other proportions could also be envisaged.
  • the method may operate interactively. In such cases, if a user re-defines the VOI, the method of setting visualization parameter boundaries is automatically reapplied to continuously provide the most appropriate visualization parameter boundaries.
  • the invention further provides a computer program product bearing computer readable instructions for performing the method of the invention.
  • the invention also provides a computer apparatus loaded with computer readable instructions for performing the method of the invention.
  • a method of numerically processing a medical image data set comprising voxels comprising: receiving user input to positively and negatively select voxels that are and are not of a tissue type of interest; determining a distinguishing function that discriminates between the positively and negatively selected voxels on the basis of one or more characterizing parameters of the voxels; and classifying further voxels in the medical image data set on the basis of the distinguishing function.
  • This method thus applies supervised pattern recognition to classify the voxels.
  • By receiving input in response to a user specifying both positive examples of voxels (i.e. those which do correspond to the tissue type of interest) and negative examples of voxels (i.e. those which do not correspond to the tissue type of interest), the method is able to objectively classify further voxels in the data set. Because of this, the method provides an easy and intuitive technique for allowing users to select regions of interest for further examination or removal from the data set.
  • the method may include presenting a representative (2D) image derived from the (3D) medical image data set to a user, such as a sagittal, coronal or transverse section view, whereby the user selects voxels by positioning a pointer at appropriate locations in the example image.
  • An example voxel may then be taken to be a voxel whose coordinates in the medical image data set map to the location of the pointer in the example image.
  • a number of example voxels may be selected, for example those in a region surrounding a voxel whose coordinates in the data set map to the location of the pointer in the example image may be taken as being selected. Selecting multiple voxels with a single positioning of the cursor allows for a more statistically significant sample of example voxels to be provided with little additional user input.
  • At least one of the one or more characterizing parameters of a voxel may be a function of surrounding voxels. For example, a local average, a local standard deviation, gradient magnitude, Laplacian, minimum value, maximum value or any other parameterization may be used. This allows voxels to be classified on the basis of characteristics of their surroundings, rather than simply on the basis of their voxel value, so similar tissue types can be classified more accurately than with conventional classification methods based on voxel value alone. This is because subtle differences in “texture” in the vicinity of a voxel can help to distinguish it from other voxels having otherwise similar voxel values.
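A sketch of such a supervised classification, using SciPy filters for a few of the characterizing parameters named above and logistic regression as one possible distinguishing function; the patent does not prescribe a particular discriminant, and all names here are illustrative.

```python
import numpy as np
from scipy import ndimage
from sklearn.linear_model import LogisticRegression

def voxel_features(volume, size=3):
    """Example characterizing parameters: local mean, local standard
    deviation and gradient magnitude around each voxel."""
    v = volume.astype(float)
    mean = ndimage.uniform_filter(v, size)
    var = np.maximum(ndimage.uniform_filter(v * v, size) - mean * mean, 0.0)
    grad = ndimage.gaussian_gradient_magnitude(v, sigma=1.0)
    return np.stack([mean, np.sqrt(var), grad], axis=-1)

def classify(volume, positive_mask, negative_mask):
    """Fit a distinguishing function from the user's positive and negative
    example voxels, then return a per-voxel probability map."""
    feats = voxel_features(volume)
    X = np.vstack([feats[positive_mask], feats[negative_mask]])
    y = np.concatenate([np.ones(positive_mask.sum()), np.zeros(negative_mask.sum())])
    model = LogisticRegression(max_iter=1000).fit(X, y)
    proba = model.predict_proba(feats.reshape(-1, feats.shape[-1]))[:, 1]
    return proba.reshape(volume.shape)
```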
  • the user input may additionally include clinical information, such as specification of tissue type or anatomical feature, regarding either the positively or negatively selected voxels, or both.
  • an image of the data set may be rendered which takes account of the classification of voxels.
  • the rendered image may then be displayed to the user.
  • the positively selected voxels may be tinted with a color in a monochrome gray scale rendering.
  • a binary classification may be used whereby voxels are classified as either corresponding to the tissue type of interest or not corresponding to the tissue type of interest.
  • voxels classified as not corresponding to the tissue type of interest may be rendered as transparent or semi-transparent in a displayed image.
  • the general practice of rendering features that are not of interest as semi-transparent is sometimes referred to as “dimming” in the art.
  • voxels which are classified as corresponding to the tissue type of interest may be rendered as transparent, or voxels classified as corresponding to the tissue type of interest may be rendered to be displayed in one range of displayable colors and voxels classified as not corresponding to the tissue type of interest rendered to be displayed in another range of displayable colors.
  • An image based on rendering a volume data set representing the value of the distinguishing function of the voxels can also be made.
  • voxels may be classified according to a calculated probability that they correspond to the tissue type of interest.
  • an image may be generated by rendering of a volume data set representing the probability that the voxels correspond to the tissue type of interest, rather than rendering based on voxel values themselves.
  • the probability can be mapped onto opacity of the rendered material instead of taking a threshold.
  • Another approach would be to render as transparent any voxels having a probability of corresponding to the tissue type of interest of less than a certain value.
  • the probabilities per voxel may themselves be considered as voxel values in a medical image data set which may be re-classified in a subsequent iteration of the method. This implements a form of relaxation labeling.
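For instance, a direct probability-to-opacity mapping keeps unlikely voxels faintly visible (“dimmed”) rather than thresholding them away. This is a sketch; the dim floor value is an arbitrary illustrative choice.

```python
import numpy as np

def opacity_from_probability(prob, dim=0.15):
    """Map per-voxel probability of the tissue type of interest onto opacity:
    probability 0 renders at the dim floor, probability 1 fully opaque."""
    return dim + (1.0 - dim) * np.asarray(prob, dtype=float)
```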
  • the user input can be prompted by displaying an image to a user from a 3D data set comprising a plurality of voxels, each with an associated signal value, for example by selecting a volume of interest (VOI) within the 3D data set; generating a histogram of signal values from voxels that are within the VOI; applying a numerical analysis method to the histogram to determine a visualization threshold; and setting at least one of a plurality of boundaries for a visualization parameter according to the visualization threshold.
  • an apparatus for numerically processing a medical image data set comprising voxels comprising: storage from which a medical image data set may be retrieved; a user input device configured to receive user input to positively and negatively select voxels that are and are not of a tissue type of interest; and a processor configured to determine a distinguishing function that discriminates between the positively and negatively selected voxels on the basis of one or more characterizing parameters of the voxels; and to classify further voxels in the medical image data set on the basis of the distinguishing function.
  • FIG. 1 shows a generic computer tomography scanner for generating a 3D data set
  • FIG. 2 a shows a 2D projection of a 3D data set with tissue opacity values being represented by a linear gray-scale
  • FIG. 2 b schematically shows a graphical representation of the color and opacity curve mappings used in generating the 2D image shown in FIG. 2 a;
  • FIG. 2 c shows a 2D projection of a 3D data set with ranges of tissue opacity values being represented by ranges of a gray-scale defined by presets;
  • FIG. 2 d schematically shows a graphical representation of the color and opacity curve mappings used in generating the 2D image shown in FIG. 2 c;
  • FIG. 2 e shows a 2D projection of a 3D data set with ranges of tissue opacity values being represented by ranges of colors defined by presets;
  • FIG. 3 shows a histogram of data values within a volume of interest (VOI) within a 3D data set
  • FIG. 4 shows a flow chart of an automatic preset determination method according to an embodiment of the invention
  • FIG. 5 a shows a histogram of data values within a VOI within a 3D data set and to which a first convex hull has been applied to determine a first visualization threshold
  • FIG. 5 b shows a histogram of data values within a VOI within a 3D data set and to which a second convex hull has been applied to determine a second visualization threshold
  • FIG. 5 c shows a histogram of data values within a VOI within a 3D data set and to which a third convex hull has been applied to determine a third visualization threshold
  • FIG. 5 d shows a histogram of data values within a VOI within a 3D data set and to which a fourth convex hull has been applied to determine a fourth visualization threshold
  • FIG. 6 shows a computer system for storing, processing and displaying medical image data
  • FIG. 7 a shows a visualization state tool loaded with a VOI from a 3D data set for which color boundaries have been determined according to an automatic preset according to a first example of the invention, referred to as “Active MR”;
  • FIG. 7 b shows an example image displayed according to the automatic preset of FIG. 7 a
  • FIG. 8 a shows a visualization state tool loaded with a VOI from a 3D data set for which color boundaries have been determined according to an automatic preset according to a second example of the invention, referred to as “Active Bone (CT)”;
  • FIG. 8 b shows an example image displayed according to the automatic preset of FIG. 8 a
  • FIG. 9 a shows a visualization state tool loaded with a VOI from a 3D data set for which color boundaries have been determined according to an automatic preset according to a third example of the invention, referred to as “Active Angio (CT)”;
  • FIG. 9 b shows an example image displayed according to the automatic preset of FIG. 9 a
  • FIG. 10 schematically shows an example display of an image and associated section views which a user may employ to identify a tissue type of interest
  • FIG. 11 is a flow chart schematically showing a method for classifying whether voxels in a volume data set belong to a tissue type of interest according to an embodiment of the invention.
  • FIGS. 12A-12D schematically show the distribution of a number of different characterizing parameters computed for example voxels identified by a user as belonging to different tissue types.
  • FIG. 1 is a schematic perspective view of a generic CT scanner 2 for obtaining a 3D scan of a region of a patient 4 .
  • An anatomical feature of interest (in this case a head) is placed within a circular opening 6 of the CT scanner 2 and a series of X-ray exposures is taken.
  • Raw image data is derived from the CT scanner and could comprise a collection of one hundred 2D 512×512 data subsets, for example.
  • These data subsets, each representing an X-ray image of the region of the patient being studied, are subject to image processing in accordance with known techniques to produce a 3D representation of the feature imaged such that various user-selected 2D projections of the 3D representation can be displayed (typically on a computer monitor).
  • the techniques for generating such 3D representations of structures from collections of 2D data subsets are known and will not be described further herein.
  • FIGS. 2 a , 2 c and 2 e show example 2D images of the same projection from a 3D CT data set but with different presets.
  • FIGS. 2 b and 2 d show graphical representations of the color and opacity curve mappings used in generating the 2D images shown in FIGS. 2 a and 2 c respectively.
  • FIGS. 2 a - e are included to illustrate the effect of presets on such images before describing how presets are implemented in specific embodiments of the invention.
  • FIG. 2 a shows an example 2D image which is a projection of a 3D data set obtained from a CT scanner.
  • a VOI within the 3D data set has been selected for display.
  • the material surrounding the VOI is not rendered in the projection.
  • the image is displayed with a uniform gray-scale ranging from black to white.
  • FIG. 2 b schematically shows a graphical representation of the color and opacity curve mappings used in generating the 2D image shown in FIG. 2 a .
  • FIG. 2 b includes a plot of a binned frequency distribution of the signal values in the VOI with a superposed line plot of opacity as a function of signal value (red curve). The shading of the area under the binned frequency distribution plot indicates the mapping of colors to signal value in the rendering.
  • There are three tissue types in the projected image: a region of bone 26 , a region of soft tissue 30 and a barely visible network of blood vessels 28 .
  • the high X-ray stopping power of bone compared to that of blood and soft tissue makes the region of bone 26 easily identifiable in the image, due to the high opacity associated with its voxels.
  • because the opacities of blood and soft tissue are more similar, they are not as clearly distinguished. In particular, it is difficult to see the blood vessel network.
  • FIG. 2 c shows a 2D image of the same projection as FIG. 2 a , but rendered with a different color and opacity mapping, i.e. a different preset.
  • FIG. 2 d schematically shows a graphical representation of the color and opacity curve mappings used in generating the 2D image shown in FIG. 2 c .
  • FIG. 2 d includes a plot of a binned frequency distribution of the signal values in the VOI and a superposed line plot of opacity as a function of signal value (red curve). The shading of the area under the binned frequency distribution plot again indicates the mapping of colors to signal values in the rendering.
  • in FIG. 2 c , signal values indicative of soft tissue voxels have been colored significantly differently from signal values indicative of blood vessel voxels. This makes the interpretation of the blood vessels much clearer. In practice, however, a wider range of colors will be available than can be reliably shown in a black-and-white figure such as FIG. 2 c , and the three tissue types could be shown in distinctly different colors.
  • FIG. 2 e shows a 2D image of the same projection as FIGS. 2 a and 2 c , but rendered with a different color and opacity mapping, i.e. a different preset.
  • FIG. 2 e shows the signal values of the voxels within the VOI (and hence the corresponding portions in the projected image) as different colors. Voxels associated with the blood vessel network are shaded yellow, those of the soft tissue are allocated shades of transparent red and those of the bone are allocated shades of cream.
  • The range of displayable colors and opacities to which the voxel signal values are to be mapped (i.e. the entries in the color table) depends on the application.
  • images might be represented using five color ranges: black, red, orange, yellow and white.
  • the opacity level from black to white might range from 0% to 100% opacity with the colors mixing in relation to the opacity curve. This would allow for a smooth transition between bands, with the color and opacity values at the upper edge or boundary of one range matching the color and opacity values at the lower edge or boundary of the adjacent range. In this way, the five ranges blend together at their boundaries to form a smoothly varying and continuous spectrum.
  • the colors at the bottom boundary of the first range and the top boundary of the fifth range are black and white respectively.
  • Each of these five color ranges can be mapped to the voxel signal values which represent different tissue types to distinctly identify the different tissues. For example, bone might be represented in shades of cream, blood in shades of yellow, a kidney in shades of orange and so on, all with varying opacities.
  • the task of attributing color and opacity to different tissue types becomes one of determining suitable signal values, hereafter referred to as visualization thresholds, for defining boundaries between the ranges.
  • Sub-ranges of signal values between color boundaries may be taken to represent different tissue types.
  • the shades of color available within the color range associated with each tissue type can then be appropriately mapped to the sub-range of signal values. For instance, in one example there are 32 shades of cream available for coloring bone, and bone is associated with signal values between 154 and 217 (in arbitrary units).
  • a color look-up table could then associate the first shade of cream with signal values 154 and 155, the second shade of cream with signal values 156 and 157, the third with 158 and 159 and so on through to associating the thirty-second shade of cream with signal values 216 and 217.
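The arithmetic of that example can be written out directly, as a sketch using the values quoted above (two consecutive signal values per shade).

```python
def cream_shade(value, lo=154, shades=32, per_shade=2):
    """Map signal values 154-217 onto 32 shades: 154 and 155 give the first
    shade (index 0), 156 and 157 the second, ..., 216 and 217 the last."""
    assert lo <= value <= lo + shades * per_shade - 1
    return (value - lo) // per_shade
```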
  • FIG. 3 is a histogram which schematically shows the binned frequency distribution F of an example set of voxels within a selected VOI as a function of signal value D.
  • the signal values D may be in arbitrary units (typical for MR) or calibrated units (such as Hounsfield units (HU) that are used for CT and other types of X-ray imaging).
  • the histogram shown in FIG. 3 represents the voxels within a VOI from which a 2D projected image (such as those shown in FIGS. 2 a , 2 c and 2 e ) can be derived and the signal values represent X-ray attenuation calibrated in HUs.
  • the signal values D of the voxels within the selected VOI are distributed between a minimum value S and a maximum value E.
  • four distinct voxel value sub-ranges are evident.
  • a first voxel value sub-range I has a narrow peak at relatively high signal values
  • a second voxel value sub-range II has a relatively broad peak
  • a third voxel value sub-range III has a shoulder on the lower signal value side of the second voxel value sub-range II
  • a fourth voxel value sub-range IV has a shoulder on the lower signal value side of the third voxel value sub-range III.
  • the sub-ranges I-IV identified in the histogram are likely to relate to different tissue types in the 3D data set and so would benefit from being displayed with different color ranges. For example, one might reasonably infer that sub-range I of high X-ray attenuation corresponds to bone, sub-range II corresponds to blood, sub-range III corresponds to soft tissue and sub-range IV represents the background tissue type or air.
  • FIG. 4 is a flow chart showing the steps involved in determining color boundaries for allocating displayable colors based on a numerical analysis of signal values.
  • a 3D data set of signal values is provided.
  • the 3D data set in this example is provided by an MR scanner and is calibrated in arbitrary units. However, any 3D data set could equally well be used.
  • a VOI within the data set is selected. This step of selecting a VOI may be performed manually or automatically, for instance using a connectivity algorithm.
  • a histogram of the signal values of the voxels within the VOI is generated, such as the one shown in FIG. 3 , for example.
  • a set of extreme signal values is identified. Excluding this set of extreme signal values from subsequent steps of the determination of color boundaries helps to avoid any undesirable skewing of the results by signal values which are not considered statistically significant. Such extreme values might be caused by highly attenuating medical implants (such as screws) or defects in the image data set, for example. By ignoring a fraction of the highest and lowest signal values, such as the extreme 0.1% of voxels at each end of the range, any extreme outlier voxels within the VOI will not unduly skew the results of the numerical analysis. If there are no extreme outlier voxels, the numerical analysis is not unduly affected by ignoring a relatively small fraction of the histogram.
  • the histogram is effectively considered to run between signal values L and U, where signal values between S and L, and between U and E, are those which are excluded. Whereas in this example a default fraction of extreme voxels is excluded, the fraction discarded could also depend on characteristics of the data set and/or the subject being studied.
  • an iteration parameter n is given an initial value of 1.
  • the iteration parameter is indicative of how many visualization thresholds have already been determined; at this stage of the flow chart, a value of n means that n−1 visualization thresholds have previously been found.
  • an n-th convex hull of the histogram is determined.
  • the n-th convex hull is defined to be a curve spanning the signal value range L to U, comprising the series of line segments which combine to form the shortest possible single curve drawn between the histogram values at L and U, subject to the condition that at any given signal value within the range the curve must be greater than or equal to the value of the histogram, and further subject to the condition that the n-th convex hull must also meet the histogram profile at all previously identified color boundaries.
  • the first convex hull contains two extended straight line portions marked b and a. These are the distinct and continuous sections of the first convex hull which deviate from following the histogram profile by “cutting corners” and thereby minimizing the integrated length of the convex hull.
  • an n-th visualization threshold Tn is found by determining the point on the histogram profile for which the n-th convex hull has the maximum nearest distance. This is the point on the histogram from which the longest possible line can be drawn to meet the n-th convex hull perpendicularly. This point, which must intersect the n-th convex hull on one of its extended straight line portions, can be determined, to a finite accuracy, by a number of known techniques.
  • the longest perpendicular which can be drawn between the first convex hull and the histogram profile shown in FIG. 5 a connects to the straight line portion marked b and is indicated in the figure by the dotted line marked c.
  • This line intersects the histogram profile at signal value T1 and so defines a first visualization threshold of T1.
  • the first visualization threshold T1 divides the histogram into two ranges, one running between signal values L and T1 and one running between signal values T1 and U. It is apparent from FIG. 5 a that the range between signal values T1 and U corresponds closely with the first voxel value sub-range I identified in the histogram shown in FIG. 3 .
  • a significance parameter for the n-th visualization threshold Tn is determined. The determination of the significance parameter is discussed further below.
  • one or more color boundaries are set based on the n-th visualization threshold Tn.
  • a single color boundary threshold is set at the data value matching the visualization threshold Tn.
  • a sharpness value is set for each of the one or more color boundaries associated with the n-th visualization threshold Tn.
  • the sharpness value may be based on the significance parameter of the n-th visualization threshold Tn and can be used to assist in displaying images.
  • the sharpness value may, for example, range from 0 to 100 and be used to determine a level of color blending between the colors near to a color boundary. Increased color blending can be set to occur at boundaries with relatively low sharpness values. This ensures that low significance boundaries appear less harsh in a displayed image. Conversely, little or no color blending is applied to boundaries with relatively high sharpness values. This ensures high significance boundaries appear well defined in a displayed image.
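One plausible blending rule, an assumption for illustration rather than the patent's formula, widens the color transition as sharpness falls.

```python
def blend_width(sharpness, max_width=16):
    """Signal-value width of the color-blending region at a boundary:
    sharpness 100 gives a hard edge, sharpness 0 the widest blend."""
    return max(1, round(max_width * (1 - sharpness / 100)))
```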
  • sharpness values can also be taken as a measure of the significance of a visualization threshold and associated boundaries.
  • the significance of a boundary may play a role in further processing as discussed further below.
  • Ntotal is the total number of visualization thresholds required.
  • if no further color boundaries are required (i.e. n = Ntotal), the flow chart follows the N-branch from step 41 and the preset determination is complete. If further color boundaries are required (i.e. n < Ntotal), the flow chart follows the Y-branch from step 41 to a step 42 where the iteration parameter n is incremented, and then returns to step 36 to continue as described above.
  • the second convex hull contains four extended straight line portions which are marked f, e, d and a in the figure.
  • the point on the histogram from which the longest possible line can be drawn to meet the second convex hull perpendicularly is determined.
  • the longest perpendicular which can be drawn between the second convex hull and the histogram profile shown in FIG. 5 b connects to the straight line portion marked f and is indicated in the figure by the dotted line marked g.
  • This line intersects the histogram profile at signal value T2, which combines with T1 to divide the histogram into three ranges, one running between signal values L and T2, one running between signal values T2 and T1 and, as before, one running between signal values T1 and U.
  • the range between signal values T2 and T1 corresponds closely with the second voxel value sub-range II identified in the histogram shown in FIG. 3 .
  • the setting of color boundaries (and their associated sharpness) which are linked to the second visualization threshold T2 will be understood from the above.
  • the convex hull in FIG. 5 c passes through the histogram values at both T1 and T2 as well as the lower and upper end-points L and U, and contains five extended straight line portions marked j, h, e, d, and a.
  • the longest perpendicular which can be drawn between the third convex hull and the histogram profile connects to the straight line portion marked j and is indicated in the figure by the dotted line marked k.
  • This line intersects the histogram profile at signal value T3, which combines with T1 and T2 to divide the histogram into four ranges, one running between signal values L and T3, one running between signal values T3 and T2, one running between signal values T2 and T1 and one running between signal values T1 and U. It is apparent from FIG. 5 c that the ranges between signal values L and T3, and T3 and T2, correspond closely with the fourth and third voxel value sub-ranges IV, III respectively identified in the histogram shown in FIG. 3 .
  • the iterative determination of visualization thresholds continues until a pre-set maximum number of associated boundaries have been determined, e.g. if there are five color ranges available for display then four color boundaries are required to define the associated five voxel value sub-ranges within the histogram.
  • the color mappings for displaying images derived from the VOI on which the histogram analysis has been performed can be generated by associating the shades available within each color range with the signal values defined by the color boundaries.
  • the method outlined above automatically identifies P voxel value sub-ranges from the histogram of the voxels contained in the VOI. For a given type of data, an appropriate value of P (and hence number of visualization thresholds to be identified) can be selected based on the expected characteristics of the data set.
  • if there are fewer than P distinct tissue types within the VOI, the allotment of P color ranges will cause more than one color range to be allotted to at least one of the tissue types.
  • a color boundary which is placed within a signal value range representing a single tissue type may appear confusing in the display, especially if the user is unaware of it.
  • color blending based on a sharpness value derived from the significance parameter for each visualization threshold can be used to de-emphasize such boundaries.
  • the significance parameter may also form the basis of a significance test for determining the significance of a visualization threshold.
  • the significance parameter may, for example, derive from the length of the determined longest perpendicular between the n-th convex hull and the histogram.
  • the significance test may require that this longest perpendicular is at least a pre-defined fraction of the histogram's characteristic dimensions. For instance, the significance test may require that the longest perpendicular be at least 5% of the geometric height or width of the histogram. In another example, the significance test may require that the longest perpendicular be at least 10% of the value of the appropriately normalized height and width of the histogram added in quadrature. The height and width may be differently normalized to provide different weighting.
  • a default fraction such as 5% or 10% may be used, and the fraction may also be changed to better suit a particular application and expected histogram characteristics.
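The quadrature variant of the test might be sketched as follows; the normalization of the height and width is left as a free weighting, as the text notes.

```python
import numpy as np

def passes_significance(perp_len, hist_width, hist_height, min_score=0.10):
    """True if the longest perpendicular is at least min_score times the
    histogram's (normalized) width and height added in quadrature."""
    return perp_len >= min_score * np.hypot(hist_width, hist_height)
```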
  • a determined visualization threshold which lies within a signal value range representing a single tissue type will fail an appropriately configured significance test. Boundaries associated with these visualization thresholds will be noted as cosmetic boundaries and will be ignored by selection tools.
  • the fourth convex hull meets the histogram at signal values T1, T2 and T3.
  • the three previously identified visualization thresholds, and hence associated color boundaries, define four voxel value sub-ranges in the histogram.
  • the histogram example shown in FIG. 3 corresponds to a VOI containing four distinct tissue types and accordingly there are no more significant visualization thresholds to be identified.
  • if a visualization threshold is deemed not to be significant, for example with reference to a model of a histogram's expected characteristics or by comparison with a pre-determined minimum length of longest perpendicular, it can either be noted as such to provide a cosmetic color boundary in the color table (and given an appropriate sharpness value of, for example, 0), or it can be discarded.
  • a cosmetic boundary is a boundary which is defined in the color table, and thus relevant for display purposes, but which is ignored by other boundary-sensitive tools in the graphics system, such as tools used for selecting objects that contain algorithms that automatically search for and mark boundaries.
  • One or more cosmetic boundaries may be determined iteratively in the manner described above until a sufficient number have been determined to satisfy the requirements of the number of available displayable color ranges (i.e. j−1 color boundaries for j displayable color ranges).
  • the iterative search for further visualization thresholds may cease after the first significance-test-failing visualization threshold is identified.
  • Individual color ranges may be allotted to the individual signal value ranges defined by the identified color boundaries, with the remaining color ranges left unused, or cosmetic boundaries can be artificially defined.
  • a cosmetic boundary could be generated by defining a color boundary mid-way between the two signal values of the most widely separated significant color boundaries. If multiple cosmetic boundaries are to be defined, the signal values with which to associate them can be determined serially, i.e. one after another using the above criterion, or in parallel such that they collectively divide the widest signal value range into equal sections.
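A sketch of the serial variant, assuming a sorted list of significant boundary signal values; calling it repeatedly (re-sorting after each insertion) yields several cosmetic boundaries.

```python
def cosmetic_midpoint(sorted_boundaries):
    """Signal value mid-way across the widest gap between consecutive
    significant boundaries."""
    widest = max(zip(sorted_boundaries, sorted_boundaries[1:]),
                 key=lambda ab: ab[1] - ab[0])
    return 0.5 * (widest[0] + widest[1])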
  • the significance of a given color boundary may be a continuous parameter and need not be purely binary, e.g. a color boundary need not simply be significant or non-significant (i.e. non-cosmetic or cosmetic).
  • sharpness values derived from a significance parameter may be used as a direct measure of a boundary's significance, such that in many cases the sharpness value itself will be used to directly indicate a boundary's significance.
  • significance may be given an insignificant level (e.g. 0) and one or more levels of significance (e.g. integers between 1 and 100). This facility may be used by other graphics tools within the image processing system, for example to determine the probability of a boundary being significant when attempting to resolve conflicts when determining the bounds of a topological entity.
  • a zero level of sharpness (i.e. significance) is set for boundaries which fail the significance test.
  • the visualization threshold's significance could be based on the appropriately normalized length of the perpendicular between the n-th convex hull and the histogram.
  • the significance of each of the color boundaries may also play a role in the appropriate use of connectivity algorithms used to define surfaces or volumes within the 3D data set which are associated with features identified by a user in the projected image.
  • FIG. 6 schematically illustrates a general purpose computer 132 of the type that may be used to perform processing in accordance with the above described techniques.
  • the computer 132 includes a central processing unit 134 , a read only memory 136 , a random access memory 138 , a hard disk drive 140 , a display driver 142 and display 144 and a user input/output circuit 146 with a keyboard 148 and mouse 150 all connected via a common bus 152 .
  • the central processing unit 134 may execute program instructions stored within the ROM 136 , the RAM 138 or the hard disk drive 140 to carry out processing of signal values that may be stored within the RAM 138 or the hard disk drive 140 .
  • Signal values may represent the image data described above and the processing may carry out the steps described above and illustrated in FIG. 4 .
  • the program may be written in a wide variety of different programming languages.
  • the computer program itself may be stored and distributed on a recording medium, such as a compact disc, or may be downloaded over a network link (not illustrated).
  • the general purpose computer 132 when operating under control of an appropriate computer program effectively forms an apparatus for processing image data in accordance with the above described technique.
  • the general purpose computer 132 also performs the method as described above and operates using a computer program product having appropriate code portions (logic) for controlling the processing as described above.
  • the image data could take a variety of forms, but the technique is particularly well suited to embodiments in which the image data comprises a collection of 2D images resulting from CT scanning, MRI scanning, ultrasound scanning or PET that are combined to synthesize a 3D object using known techniques.
  • the aided visualization of distinct features within such images can be of significant benefit in the interpretation of those images when they are subsequently projected into 2D representations along arbitrarily selected directions that allow a user to view the synthesized 3D object from any particular angle they choose.
  • presets may be required to satisfy further conditions before they are accepted for defining color range boundaries (or other visualization parameters).
  • preset thresholds should not be defined between those signal values representing soft tissue and blood vessels since both of these should be made transparent in the displayed image. Instead, less significant but more appropriate thresholds within the range of signal values representing bone may be preferred.
  • Magnetic resonance imaging (MR) data sets are generally uncalibrated and display a wide range of data values, dependent on, for example, acquisition parameter values or the position of the VOI with respect to a scanner's detector coils during scanning. Accordingly, it is not usually possible to pre-estimate suitable signal values with which to attribute color range presets, and this makes the present invention especially useful for application to MR data sets.
  • FIG. 7 a schematically shows the appearance of a visualization state tool displayed on the display 144 shown in FIG. 6 .
  • the display tool shows a user the outcome of a preset determination method for a selected VOI.
  • the visualization state tool comprises a data display window 80 , a color display bar 72 , a display of opacity values 74 , a display of boundary positions 78 , a display of sharpness values 76 and a number of display modification buttons 82 .
  • the color display bar identifies the five color ranges available for display in this application, although no details of the available shades within each range are shown.
  • the display of boundary positions 78 shows the signal values of the four determined color boundaries.
  • the individual display windows for each of these four boundary positions are centered beneath the two color ranges indicated on the color display bar 72 with which each is associated.
  • the display of sharpness values 76 can be used to determine the significance of the four determined boundaries.
  • the display of opacity values 74 at each of the boundary positions (and at the maximum signal value) is also shown (with example values 0, 0, 73, 90, 100).
  • the data display window shows a logarithmically scaled histogram of the signal values for the entire 3D data set overlaid with dashed and solid vertical lines marking the cosmetic and significant boundary positions respectively.
  • the boundary lines may also be marked differently, for example based upon their relative significance.
  • the histogram is colored to represent the color table and opacity mapping at each signal value indicated by the scale along the top of the data display window.
  • the frequency distribution shown here is that of the entire image data set and is not restricted to the VOI on which the preset determination is based. In some cases it may aid a user's interpretation if the histogram of the selected VOI is shown.
  • a curve is plotted overlaying the frequency distribution which shows the opacity curve.
  • the opacity curve is formed by interpolation between the opacity values set for each of the color boundary positions, taking into account the sharpness at each boundary; in this example the opacities are 0, 0, 73, 90 and 100 for voxel values 11, 19, 102, 158 and the maximum voxel value respectively.
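Ignoring the sharpness weighting, the piecewise-linear part of that interpolation can be reproduced with the example values from the text; a data maximum of 255 is assumed here for illustration.

```python
import numpy as np

boundary_values = [11, 19, 102, 158, 255]   # last entry: assumed data maximum
boundary_opacity = [0, 0, 73, 90, 100]      # opacities quoted in the text
opacity_curve = np.interp(np.arange(256), boundary_values, boundary_opacity)
```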
  • the display modification buttons 82 allow a user to pan along the histogram shown in the display window and also allow repositioning of color boundaries if required.
  • the preset determination method might be configured to automatically assume that it is the tissue/air interface which is required to be visualized. Accordingly, after determining the presets, the voxels containing signal values below the lowest significant threshold will be assumed to represent air and be made transparent in the projection. If the VOI is selected such that only a small proportion of air is included, the preset determination method will look for a higher threshold of apparent significance to determine which voxels should be made transparent. As noted above, the intensities in MR vary substantially across the imaged volume due to coil positioning and other factors, so the smaller the VOI, the more accurate and useful the resulting visualization is likely to be.
  • When applied to an MR data set, the active preset determination method of this example first tries to find a candidate threshold, using the technique described above, which further satisfies the condition that 60% (±30%) of the volume is transparent.
  • a suitable threshold is found to be at signal value 15.
  • the design of the “Color/Opacity Settings” interpolation in the particular display software used in this example operates best if two boundaries are placed a small distance on either side of the computed background threshold, in order to provide a rapid rise in the opacity curve as usually desired.
  • these two boundaries are placed at positions ±4 signal value units either side of the visualization threshold, at signal values of 11 and 19 respectively.
  • the boundaries may also be placed at other positions, for example at positions ±3 signal value units either side of the visualization threshold.
  • the method looks for up to two more candidate visualization thresholds above the background level, at which to place the two remaining color boundaries. If two significant visualization thresholds cannot be found, then the missing color boundaries are placed in the center of the largest gap, but, as noted above, the associated sharpness (i.e. significance) is set to 0, indicating a cosmetic boundary with no significance to selection.
  • a second visualization threshold at signal value position 102 is determined to be significant and a single color boundary with a sharpness set to 5 is defined at signal value position 102. No further significant visualization thresholds are found and the remaining color boundary, between yellow and white, is placed at signal value position 158. It should be noted that, whilst not immediately apparent from the histogram shown in the data display window 80, this cosmetic boundary is in the middle of the widest gap between significant color boundaries. There are two reasons why it is perhaps not immediately apparent. Firstly, the histogram upon which the analysis is made differs from the histogram shown in the data display window, since the former is restricted to the VOI and un-sculpted domain whereas the latter represents the entire 3D data set.
  • Secondly, the lowest and highest 0.1% of voxels are excluded from the numerical analysis, and because the histogram shown in the dialog has a logarithmic vertical scale the voxels at the extremes of the voxel value range can appear more significant than they are.
  • FIG. 7 b shows an example image displayed according to the automatic preset of FIG. 7 a.
  • the air/tissue interface is shown, and regions of skin, bone and soft tissue are apparent in the image.
  • the same preset type is used as in the first example. This may be useful for CT data sets in which a user wants to visualize soft tissue.
  • This example is for use on CT data sets for the purpose of visualizing and selecting bone.
  • FIG. 8 a schematically shows the appearance of a visualization state tool presenting an example of use of the “Active Bone (CT)” Preset.
  • CT Active Bone
  • the “Active Bone (CT) Preset” operates by determining a first significant visualization threshold within the signal value range 70 HU to 270 HU.
  • in this example, a visualization threshold value of 182 HU is determined; if no such visualization threshold is found, a default value of 170 HU is used.
  • This first visualization threshold is used to set the background level in the display software by setting two boundary positions at ±45 HU from the first visualization threshold. The boundary positions in this example are accordingly at 137 HU and 227 HU. With the five available ranges of color indicated in FIG. 8 a there are two remaining boundaries to determine; a sketch parameterizing both this preset and the "Active Angio" preset appears after the angio example below.
  • One of these is placed at −500 HU which, in the "Active Bone" scheme, is denoted as a significant boundary with a sharpness of 5 to show some information about soft tissue in the side multi-planar reconstruction (MPR) views.
  • a fourth color boundary is placed at 600 HU to give some intensity information.
  • the fourth color boundary is ascribed a sharpness value of 0 to denote a cosmetic boundary.
  • FIG. 8 b shows an example image displayed according to the automatic preset of FIG. 8 a. Regions of bone are most apparent in the image.
  • CT Active Angio
  • This preset assumes the data are in correctly calibrated Hounsfield units.
  • the purpose of this preset is to visualize angio-tissue.
  • FIG. 9 a schematically shows the appearance of a visualization state tool presenting an example of use of the “Active Angio (CT)” Preset.
  • the yellow/white boundary is placed at a position determined by the histogram analysis within the range 550 HU ± 200 HU, and this boundary is given a sharpness of 5 so that it is significant to selection. In the example a boundary position of 550 HU is determined.
  • the selection tools can be used in conjunction with this preset to discriminate bone from contrast enhanced vasculature.
  • the other boundary positions are fixed at values −500 HU, 105 HU and 195 HU.
  • FIG. 9 b shows an example image displayed according to the automatic preset of FIG. 9 a. Regions of bone and angio-tissue are most apparent in the image.
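Since the "Active Bone (CT)" and "Active Angio (CT)" presets differ mainly in their numeric parameters, they can be viewed as two configurations of one routine. The sketch below is an illustrative reconstruction, not the actual product code: `find_threshold` stands in for the histogram analysis described above, each boundary is written as a (position in HU, sharpness) pair, and the sharpness values not stated in the text (the bone background pair, the fixed angio boundaries, and the angio fallback threshold) are assumptions.

```python
def active_bone_boundaries(find_threshold):
    """'Active Bone (CT)': search 70..270 HU, falling back to 170 HU."""
    t = find_threshold(70, 270) or 170
    return [
        (-500, 5),    # significant: soft-tissue context in the side MPR views
        (t - 45, 0),  # background pair either side of the threshold;
        (t + 45, 5),  #   sharpness split assumed, following the MR example
        (600, 0),     # cosmetic: intensity information only
    ]

def active_angio_boundaries(find_threshold):
    """'Active Angio (CT)': yellow/white boundary searched in 550 +/- 200 HU."""
    t = find_threshold(350, 750) or 550   # fallback value assumed
    return [(-500, 0), (105, 0), (195, 0), (t, 5)]
```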
  • the preset determination method of the present invention can be specifically tailored in any number of ways to apply to data sets with known specific characteristics.
  • the method can also be used as an entirely general tool with no prior knowledge of the data set, such as in the MR example described above.
  • users may themselves customize the preset determination to suit the requirements of a particular study. This might be done, for example, where CT calibrated data are used and a user requires features with a particular X-ray attenuation to be identified. This might also be done to distinguish between two tissue types of similar X-ray attenuations.
  • a user might modify the preset determination method based on the appearance of a single 2D projection so that the preset is applied consistently to all 2D projections generated from that voxel data set.
  • parameters associated with a user's personal customizations to the method, such as the significance test stringency, the typical fraction of the data set the user wants to appear transparent, or specific signal value ranges in which thresholds should occur, can be stored so that the modified method can be consistently applied to further data sets.
  • determined visualization thresholds are equally suitable for defining boundaries for visualization parameters other than color, such as opacity, that are relevant for rendering. Furthermore, it is often clinically useful for boundaries in an opacity mapping to be positioned at the same signal values as the boundaries in a color mapping.
  • Other visualization parameters for which the invention could be used include rate of change of color with signal value, rate of change of opacity with signal value, and segmentation information.
  • the preset determination is also not limited to finding any particular number of boundaries, and the associated number of visualization thresholds, but is extendable to determining any number of boundaries or visualization thresholds. In some applications it may be appropriate to determine more visualization thresholds than there are distinct boundaries required to allow less significant thresholds to play a role in defining the specific allocation of available color shades or transitions between colors within one or more of the determined ranges, for example.
  • while in some cases the visualization parameter boundaries are determined fully automatically, in other cases some level of user input can assist in determining the most appropriate conditions for displaying an image. This is because once an automatic preset has been determined it may be desirable to make an assumption regarding which aspects of the data a user is interested in seeing in a displayed image. For example, in the histogram of CT data shown in FIG. 3, four tissue types are identified. As previously noted, it might reasonably be inferred that sub-range I of high X-ray attenuation corresponds to bone, sub-range II corresponds to blood, sub-range III corresponds to soft tissue and sub-range IV represents the background tissue type or air.
  • an assumption might be made that the user does not wish to view the data corresponding to sub-range IV (background tissue type and air) and so voxels corresponding to this region will be rendered transparent.
  • the displayed image will then show the bone, blood and soft tissue.
  • alternatively, a user may be interested in viewing bone and blood only, with soft tissue also rendered transparent.
  • equally, the user might wish to view the background tissue, in which case it should not be rendered transparent.
  • the user may be invited to identify in a displayed 2D image one or more examples of areas which are of interest and should be rendered visible, and one or more examples of areas which are not of interest and which should be rendered transparent.
  • the user might identify such example areas by moving a cursor to appropriate parts of a displayed image and selecting the examples by "clicking" with a mouse, for example. Once the example areas have been identified, it is possible to determine which sub-ranges they fall within and so set appropriate display conditions for these sub-ranges (e.g. transparent or non-transparent).
  • while identifying tissue types which are and which are not of interest can assist in displaying images in conjunction with the above-described automatic preset determination, these techniques can also be applied more generally to classify different tissue types in medical image volume data.
  • the technique can be particularly useful where different tissue types appear very similar in the data, for example because they have similar X-ray stopping powers for CT data.
  • some of the sub-ranges may contain two subtly different tissue types, for example, sub-range I may include distinct regions of bone having subtly different densities from each other.
  • Another example is identification of tumors in organs such as the liver or brain. It can be difficult to properly classify voxels in the volume data which correspond with these different tissue types due to the similarity in the signal values associated with them.
  • FIG. 10 shows an example screen shot of a display 101 of a 2-D image generated from a volume (i.e. 3-D) data set.
  • a main image 100 displays a 2-D image rendered from the volume data.
  • the main image 100 shown in the figure includes a partial wire-frame cuboid to assist a user in interpreting the orientation of the image with respect to the original volume data, and some basic textual information, such as the date and time.
  • the display 101 also contains a sagittal section view 102, a coronal section view 104, and a transverse section view 106 of the volume data to assist in diagnostic interpretation.
  • a number of different tissue types, for example corresponding to bone and brain, are seen in the image.
  • the top portion of the skull has been sculpted away (i.e. rendered transparent) so that the underlying brain can be seen.
  • a user viewing the display shown in FIG. 10 may wish to sculpt away further material so that a particular tissue type of interest within the brain can be viewed.
  • the tissue type of interest might correspond to a feature the user has observed in one of the section views 102, 104, 106 displayed on the left of the display and wishes to examine further.
  • where there are clear differences between the voxel values associated with voxels corresponding to different types of tissue, for example as seen for bone and soft tissue in a CT scan, it can be relatively easy to classify the voxels.
  • where different tissue types have similar voxel values, however, segmentation algorithms can often fail to properly classify voxels corresponding to the different tissue types. If segmentation is performed on the basis of voxel values expected for voxels corresponding to the tissue type of interest, a carefully selected window of values needs to be defined. Voxels having values falling within the window are considered to correspond to the tissue type of interest, while voxels having values falling outside of the window are considered not to correspond to the tissue type of interest.
  • FIG. 11 is a flow chart schematically showing a method of identifying voxels in a medical image data set which correspond to a tissue type of interest according to an embodiment of the invention. It will be assumed by way of example that the method is executed in response to a user, having been presented with the display shown in FIG. 10, identifying in the sagittal section view 102 an anomalous region of brain which appears slightly different to surrounding tissue and which he wants to examine further.
  • the method is performed by a suitably programmed general purpose computer, such as that shown in FIG. 6 .
  • the computer may be a stand-alone machine or may form part of a network, for example, a Picture Archiving and Communication System (PACS) network.
  • in Step 111 of FIG. 11, input is received from the user which identifies (selects) voxels corresponding to the tissue type of interest.
  • this is conveniently performed by the user positioning a cursor ("pointer"), displayed on the screen 144 showing the image 101, over a pixel corresponding to the tissue type of interest in one of the section views 102, 104, 106, the cursor being positioned by manipulation of the mouse 150.
  • other input means such as a light-pen, graphics tablet or track ball, for example, may equally be used to point to the tissue type of interest.
  • since in this example the user initially noticed the region he wishes to examine further in the sagittal section view 102, it is assumed he positions the cursor over a pixel within the anomalous region in this view. If the region is also apparent in either of the other section views 104, 106, he may equally position the cursor over an appropriate pixel in those views. Once the cursor is positioned over a desired pixel, the user indicates his selection by pressing ("clicking") a button on the mouse 150. Any other input means could equally be used. A voxel in the volume data corresponding to the selected pixel is then determined based on the plane of the section view within the volume data and the selected position within the section view.
  • the selected pixel might span a number of voxels in the volume data.
  • the voxel in which the selected pixel is situated is taken as the identified voxel.
  • all of the voxels within a region of a predetermined size and shape surrounding a central selected voxel might be considered as being identified as corresponding to the tissue type of interest.
  • the user may identify any number of further voxels by clicking elsewhere in the sagittal or other section views.
  • the user may change the particular displayed sagittal, coronal and/or transverse section views to allow for voxels identifying the tissue type of interest to be selected from anywhere within the volume data.
  • around five voxels corresponding to the tissue type of interest might typically be identified in this way, though fewer or more may be preferred. These voxels will be referred to as positively selected voxels and the process of identifying them will be referred to as making a positive selection.
  • alternatively, a range of pixels could be identified by a user "clicking" twice to identify opposite corners of a rectangle, or a center and a point on the circumference of a circle, or by defining a shape in some other way. Voxels corresponding to pixels within the perimeter of the shape may then all be deemed to have been identified.
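To make the mapping from selected pixels to voxels concrete, here is a small sketch under simplifying assumptions: axis-aligned section views of a volume indexed `volume[z, y, x]`, with zoom, pan and voxel spacing ignored. The function and parameter names are illustrative only.

```python
from itertools import product

def clicked_voxel(view, slice_index, row, col):
    """Map a click in an orthogonal section view to (z, y, x) indices."""
    if view == "transverse":   # view plane at fixed z
        return (slice_index, row, col)
    if view == "coronal":      # view plane at fixed y
        return (row, slice_index, col)
    if view == "sagittal":     # view plane at fixed x
        return (row, col, slice_index)
    raise ValueError(f"unknown view: {view}")

def voxels_in_rectangle(view, slice_index, corner_a, corner_b):
    """All voxels whose pixels lie within a rectangle given by two clicks."""
    (r0, c0), (r1, c1) = corner_a, corner_b
    rows = range(min(r0, r1), max(r0, r1) + 1)
    cols = range(min(c0, c1), max(c0, c1) + 1)
    return [clicked_voxel(view, slice_index, r, c)
            for r, c in product(rows, cols)]
```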
  • in Step 112, input is received from the user which identifies (selects) voxels not corresponding to the tissue type of interest.
  • Step 112 may be performed in a manner which is similar to Step 111 described above, but in which the user positions the cursor over pixels in the sagittal, coronal and/or transverse sections which do not correspond to the tissue type of interest.
  • the user may indicate his selection by “clicking” a different mouse button to that used to identify the positively selected voxels.
  • the same mouse button might be used in parallel with the pressing of a key on the keyboard 148 .
  • the user should identify voxels which are most similar to the tissue type of interest, but which he wants to exclude nonetheless. This is because voxels which differ more significantly from voxels corresponding to the tissue type of interest are easier to classify as not being of interest.
  • since in this example the tissue type of interest is an anomalous region of brain which appears only slightly different from its surroundings in the sagittal section view 102, the user should identify voxels by selecting pixels in the area surrounding the anomalous region. However, if there are other regions which also appear similar to the tissue type of interest, but which are not necessarily in close proximity to it, the user may also identify some voxels corresponding to these regions.
  • similarly, around five voxels not corresponding to the tissue type of interest might be identified. However, as few as one or many more than five may also be chosen. For example, if there are a number of regions in the data appearing only slightly different from the tissue type of interest, the user may choose to identify a number of voxels in each of these regions.
  • the voxels identified in Step 112 will be referred to as negatively selected voxels, and the process of identifying them will be referred to as making a negative selection.
  • in Step 113, one or more characterizing parameters are computed for each of the voxels selected in Steps 111 and 112.
  • four characterizing parameters, namely voxel value V, a local average A, a local standard deviation σ and the maximum Sobel edge filter response S over all orientations, are determined for each voxel.
  • in other examples, instead of the maximum Sobel edge filter response, gradient magnitude is used.
  • the local average and standard deviation are computed for a 5×5×5 cube of voxels centered on the particular voxel at hand.
  • other regions may also be used; for example, a smaller region may be considered for faster performance.
  • the regions need not be three-dimensional; a 5×5 square of voxels, or another region, in an arbitrarily chosen or pre-determined plane may equally be used.
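The four characterizing parameters of this example could be computed along the following lines. This is a sketch assuming SciPy is available and that the voxel lies far enough from the volume edge for the full cube to exist; the maximum Sobel response is approximated here by the three axis-aligned filter orientations, whereas a full implementation might consider more orientations.

```python
import numpy as np
from scipy import ndimage

def characterizing_parameters(volume, z, y, x, size=5):
    """Compute V, A, sigma and S for one voxel of a 3D volume."""
    h = size // 2
    cube = volume[z-h:z+h+1, y-h:y+h+1, x-h:x+h+1].astype(float)
    V = float(volume[z, y, x])   # voxel value
    A = cube.mean()              # local average over the size^3 cube
    sigma = cube.std()           # local standard deviation
    # Maximum Sobel edge response at the central voxel, over the three axes
    S = max(abs(ndimage.sobel(cube, axis=a)[h, h, h]) for a in range(3))
    return V, A, sigma, S
```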
  • in Step 114, the distributions of the computed characterizing parameters are analyzed to determine which of them may be used to distinguish between the positively selected and the negatively selected voxels.
  • FIGS. 12A-12D show example distributions of voxel value V, local average A, local standard deviation σ and maximum Sobel edge filter response S respectively for five positively selected and five negatively selected voxels.
  • the values for the positively selected voxels are marked by “plus” symbols above the horizontal line representing the range of values of the particular characterizing parameter at appropriate positions along the line.
  • the values for the negatively selected voxels are similarly represented by “minus” symbols below the line.
  • as seen in FIG. 12A, the voxel values V are broadly similar for both the positively and negatively selected voxels. The local averages A are also broadly similar for both groups. There appears to be a slight bias towards higher values of local average for positively selected voxels, but there is still a large degree of overlap.
  • the computed local standard deviations σ are significantly different for the positively and negatively selected voxels.
  • the regions surrounding the positively selected voxels tend to have significantly larger standard deviations than those surrounding the negatively selected voxels. This indicates that the positively selected voxels from the region of tissue type which the user wishes to examine further correspond to regions of greater granularity in the data. It is likely to be this greater degree of granularity which causes the region to appear to human visual perception to be slightly different to the surrounding regions in the section views.
  • local standard deviation σ is a characterizing parameter which distinguishes well between positively and negatively selected voxels, and as such is considered to be a distinguishing parameter.
  • in this example, only one distinguishing parameter is sought, and it is chosen on the basis of being the computed characterizing parameter most able to discriminate between the positively and negatively selected voxels.
  • the ability of a given characterizing parameter to discriminate is referred to as its discrimination power and may be parameterized using conventional statistical analysis. In this example, this is done by separately calculating the average and the standard deviation of each characterizing parameter for the positively and the negatively selected voxels.
  • the discriminating power of a given characterizing parameter is then taken to be the difference in the average for the positively and negatively selected voxels divided by the quadrature sum of their standard deviations.
  • the characterizing parameter having the greatest discriminating power is then taken to be the distinguishing parameter.
  • multiple distinguishing parameters may be used, for example all characterizing parameters having a discriminating power greater than a certain level or a fixed number of characterizing parameters having the highest discriminating powers may be used.
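Expressed as a formula, the discriminating power of a characterizing parameter with class means m+ and m− and class standard deviations s+ and s− is |m+ − m−| / √(s+² + s−²). A minimal sketch, with illustrative names:

```python
import numpy as np

def discriminating_power(pos_values, neg_values):
    """|mean+ - mean-| divided by the quadrature sum of the class
    standard deviations, as described above."""
    pos = np.asarray(pos_values, dtype=float)
    neg = np.asarray(neg_values, dtype=float)
    return abs(pos.mean() - neg.mean()) / np.hypot(pos.std(), neg.std())

# The distinguishing parameter is the characterizing parameter with the
# greatest discriminating power, e.g. (pos/neg are hypothetical dicts
# mapping parameter name to the values for the selected voxels):
# best = max(params, key=lambda p: discriminating_power(pos[p], neg[p]))
```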
  • in Step 115, the distinguishing parameter (i.e. local standard deviation σ in this case) is calculated for other voxels in the data.
  • a conventional segmentation algorithm may first be applied to the data to identify which voxels belong to significantly different tissue types (e.g. bone or brain). Once this is done, the local standard deviation ⁇ may then be calculated only for those voxels which have been classified by the conventional segmentation algorithm as corresponding to brain. This is because there would be no need to perform the computation for voxels which have already been distinguished from the tissue type of interest by the conventional segmentation algorithm.
  • the calculation may only be made for voxels in a VOI identified by the user.
  • in Step 116, the distinguishing parameter (i.e. the local standard deviation for the example characterizing parameter distributions seen in FIGS. 12A-D) is used to classify each of the other voxels. This is performed in this example by defining a critical local standard deviation σc (marked in FIG. 12C) between the average local standard deviation for the positively selected voxels and the average local standard deviation for the negatively selected voxels. If the local standard deviation computed in Step 115 for a particular voxel is greater than σc, the voxel is classified as belonging to the tissue type of interest; if it is less than σc, the voxel is classified as not belonging to the tissue type of interest.
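One simple placement of the critical value is midway between the two class averages; note this midpoint is an assumption of the sketch, since the text above requires only that σc lie somewhere between the two averages.

```python
def classify_by_threshold(value, pos_mean, neg_mean):
    """Binary classification against a critical value sigma_c placed
    midway between the class means (the positive class is assumed to
    have the larger mean, as in FIG. 12C)."""
    sigma_c = 0.5 * (pos_mean + neg_mean)
    return value > sigma_c  # True -> tissue type of interest
```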
  • in the example above, the computed value of one of the characterizing parameters is itself identified as being able to distinguish between the tissue type of interest and surrounding tissue. In other cases, a function of the characterizing parameters may be more suitable; for example, the ratio of two different characterizing parameters may have a greater discriminating power between positively and negatively selected voxels than either of the characterizing parameters themselves.
  • as a numerical example of how this can arise, suppose values generally between 2.5 and 3.5 (arbitrary units) are found for one characterizing parameter for both positively and negatively selected voxels, and values generally between 5 and 7 (arbitrary units) are found for another characterizing parameter, again for both populations. Neither parameter alone then separates the positively from the negatively selected voxels. If, however, the two parameters are correlated differently in the two populations, such that, say, ratios near 2 between the second and first parameters are typical of the positively selected voxels while ratios near 2.5 are typical of the negatively selected voxels, the ratio discriminates where the individual parameters do not.
  • where the tissue type of interest is expected to form a single volume, a connectivity requirement may be imposed on the classification. This would mean voxels which are not linked to the positively selected voxels by a chain of voxels classified as corresponding to the tissue type of interest will be classified as not corresponding to this tissue type, even if their distinguishing parameters are such that they would otherwise be considered to do so.
  • once the voxels have been classified, the user may proceed to examine those corresponding to the tissue type of interest as desired. For example, the user may render an image showing only the tissue type of interest.
  • the tissue type of interest may be shown in one color and other tissue types in other colors; that is to say, the method shown in FIG. 11 may be used as the basis for calculating presets. This could be realized, for example, when a monochrome image of the brain is displayed.
  • the classification could be used to distinguish between white and gray matter in the brain. Based on the classification, the gray matter is displayed shaded in a semi-transparent blue color wash.
  • the selected object can be measured in some way, for example its volume can be calculated. Another example is that the unclassified parts ("don't want" regions) are "dimmed", i.e. rendered semi-transparent.
  • an image based on the distinguishing parameter itself may be rendered (e.g. using the distinguishing parameter as the imaged parameter in the rendering rather than voxel value).
  • an image based on the local standard deviation for each of the voxels may be rendered instead. Ranges of color and/or opacity may be associated with different values of local standard deviation and an image rendered accordingly.
  • Visualization presets for the rendered image may be calculated as previously described, for example. This approach can provide for a displayed image in which a user can easily distinguish the tissue type of interest from surrounding tissue because characteristics of the tissue type of interest which differentiates it from its surroundings are used as the basis for rendering the image.
  • the classification may be used in conjunction with conventional analysis techniques, for example to calculate the volume of the anomalous region corresponding to the tissue type of interest. It will of course be appreciated that in some cases a region of interest might be of interest merely because the user wishes to identify it so it can be discarded from subsequent display or analysis.
  • the order of Step 111 and Step 112 could be reversed, or the steps could even be intertwined. That is to say, a user could identify some voxels which correspond to the tissue type of interest, then some voxels which do not correspond to the tissue type of interest, and then some more voxels corresponding to the tissue type of interest and so on (i.e. in effect cycle between Step 111 and Step 112).
  • the process may return to earlier steps during execution. For example, a user may be alerted at Step 114 if there are no characterizing parameters having a discriminating power above a predetermined level. In response to this, the user may choose to return to Step 111 and/or Step 112 to provide more examples. Alternatively, in such a circumstance the user may instead indicate that additional characterizing parameters should be determined and their discriminating powers examined, or may simply choose to proceed with the classification nonetheless.
  • the method shown in FIG. 11 may be modified in a number of ways. For example, rather than simply having a binary classification (i.e. classifying voxels as either corresponding to the tissue type of interest or not corresponding to the tissue type of interest) a probability classification may be used. Each voxel may be attributed a likelihood of corresponding to the same tissue type as the positively selected voxels on the basis of how much its distinguishing parameter differs from those of the negatively selected voxels. In this scheme, a voxel having a local standard deviation of σ1 shown in FIG. 12C would be classified as having a greater probability of belonging to the population of voxels corresponding to the tissue type of interest than one having a local standard deviation of σ2.
  • more than one distinguishing parameter may be used for the classification. For example, if multiple parameters are identified in Step 114 as being capable of distinguishing between the positively and negatively selected voxels, these multiple distinguishing parameters may each then be computed for the other voxels in Step 115 .
  • the classification in Step 116 could then be based on a conventional multi-dimensional expectation maximization (EM) algorithm or other cluster recognition process which takes the distinguishing parameters computed for the positively and negatively selected voxels as seeds for defining the populations of voxels (i.e. the population of voxels corresponding to the tissue type of interest and the population of voxels not corresponding to the tissue type of interest).
  • example classification schemes when the distinguishing function has two or more characterizing parameters are multivariate Gaussian maximum likelihood and k-NN (k-nearest neighbors).
  • the EM algorithm provides the distributions for the positive and negative cases which then allows, for each voxel, a probability to be determined that the voxel is a member of the population exemplified by the positively selected voxels, that is to say a probability that the voxel corresponds to the tissue type of interest.
  • the EM algorithm may also provide an estimate of the overall fraction of voxels which are members of the population exemplified by the positively selected voxels. This information allows an image of the tissue type of interest to be rendered from the volume data in a number of ways.
  • One way is to render all voxels having a probability of corresponding to the tissue type of interest lower than a threshold level as transparent, and render the remaining voxels using conventional techniques based on their voxel values (e.g. opacity to X-rays for CT data).
  • the threshold level may be selected arbitrarily, for example at 50%, or may be selected such that the total number of voxels falling above the threshold level corresponds to the overall fraction of voxels which are members of the population exemplified by the positively selected voxels predicted by the EM algorithm.
  • Another way of generating an image showing the tissue type of interest would be to again render all voxels having a probability of corresponding to the tissue type of interest lower than a threshold level as transparent, but to then render the remaining voxels based on their probability of corresponding to the tissue type of interest, rather than their voxel values.
  • This provides a form of probability image from which a user can immediately identify the likelihood of individual areas being correctly classified as corresponding to the tissue type of interest.
  • the user may be presented with the opportunity of manually altering the threshold level. This allows the user to determine an appropriate compromise between including too many false negatives (i.e. voxels which do not correspond to the tissue type of interest) and excluding too many true positives (i.e. voxels which do correspond to the tissue type of interest).
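As one concrete, hedged realization of this probability-based classification, scikit-learn's GaussianMixture can stand in for the two-population EM fit described above, seeded from the user's selections; the embodiment described here is not tied to this library, and the function and variable names are illustrative only.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def probability_of_interest(features, pos_features, neg_features):
    """Per-voxel probability of membership of the 'of interest' population.

    features                    -- (n_voxels, n_params) distinguishing parameters
    pos_features / neg_features -- parameters of the user-selected examples,
                                   used to seed the two EM components
    """
    seeds = np.array([pos_features.mean(axis=0), neg_features.mean(axis=0)])
    gmm = GaussianMixture(n_components=2, means_init=seeds)
    gmm.fit(np.vstack([features, pos_features, neg_features]))
    # Component 0 was seeded from the positive examples; EM normally stays
    # near its seed, though this is not strictly guaranteed.
    return gmm.predict_proba(features)[:, 0]

# Rendering example: voxels below a probability threshold become transparent.
# probs = probability_of_interest(F, F_pos, F_neg)
# transparent = probs < 0.5   # threshold could instead be user-adjustable
```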
  • in some cases a user may wish to identify multiple tissue types. This can be achieved by the user making positive selections for each of the different tissue types of interest in Step 111 shown in FIG. 11.
  • a distinguishing parameter identified in Step 114 for each tissue type of interest can then be used to classify the voxels.
  • for example, if in addition to the positive selection of voxels corresponding to the anomalous region of brain discussed above, the user is also interested in further examination of a second anomalous region sited elsewhere in the brain, the user simply makes some positive selections of that region. If the second anomalous region is represented by voxels having voxel values which are generally higher than those of the negatively selected voxels, but having a similar local standard deviation, then, unlike the voxels in the first anomalous region, they cannot be classified on the basis of local standard deviation. This means in Step 114 both local standard deviation σ and voxel value V will be determined to be distinguishing parameters and both will be calculated in Step 115 for other voxels in the data.
  • voxels may then be classified as corresponding to one of the tissue types of interest if either their local standard deviation is different from that of the negatively selected voxels (in which case they relate to the first anomalous region) or their voxel value is different from that of the negatively selected voxels (in which case they relate to the second anomalous region).
  • the method may also be applied in an iterative manner. For example, following execution of the method shown in FIG. 11 a probability image showing the classification of the voxels may be displayed to the user. The user may then decide to refine the classification by re-executing the method on the basis of the probability image. This is a form of relaxation labeling and allows for additional spatial information to be exploited in each subsequent iteration.
  • the computation of the distinguishing parameters may include additional analysis techniques to assist in the proper classification of voxels. For example, partial volume effects might cause a boundary between two types of tissue which are not of interest to be wrongly classified. If this is a concern in a particular situation, techniques such as partial volume filtering as described in WO 02/084594 [1] may be employed when computing the distinguishing parameters.
  • the user input may additionally include clinical information, such as specification of tissue type or anatomical feature of interest.
  • the user input may adopt the paradigm “want that gray matter—don't want that white matter”, or “want that liver—don't want that other (unspecified) tissue”, or “want that (liver) tumor—don't want that healthy (liver) tissue”, or “want that (unspecified) tissue—don't want that fat tissue”.
  • This user input can be done by appropriate pointer selection in combination with filling out a text label or selection from a drop down menu of options.
  • the distinguishing function can then be determined from the characterizing parameters having regard to the clinical information input by the user. For example, if the positively selected voxels are indicated as belonging to a tumor, local standard deviation may be preferentially selected as the distinguishing function, since this will be sensitive to the enhanced granularity that is an attribute of tumors.
  • multiple volume data sets of a single patient may be available, for example from different imaging modalities or from the same modality but taken at different times. If the images can be appropriately registered with one another, it is possible to classify voxels in one of these volume data sets on the basis of positively and negatively selected voxels in another. Distinguishing parameters may even be based on an analysis of voxels in one data set yet be used to classify voxels in another data set. This can help because, with more information made available, it is more likely that a good distinguishing parameter can be found.
  • although the described embodiments employ a computer program operating on a general purpose computer, for example a conventional computer workstation, special purpose hardware could also be used.
  • at least some of the functionality could be effected using special purpose circuits, for example a field programmable gate array (FPGA) or an application specific integrated circuit (ASIC) or in the form of a graphics processing unit (GPU).
  • multi-thread processing or parallel computing hardware could be used for at least some of the processing. For example, different threads or processing stages could be used to calculate respective characterizing parameters.

Abstract

A computer automated method that applies supervised pattern recognition to classify whether voxels in a medical image data set correspond to a tissue type of interest is described. The method comprises a user identifying examples of voxels which correspond to the tissue type of interest and examples of voxels which do not. Characterizing parameters, such as voxel value, local averages and local standard deviations of voxel value, are then computed for the identified example voxels. From these characterizing parameters, one or more distinguishing parameters are identified. The distinguishing parameters are those parameters having values which depend on whether or not the voxel with which they are associated corresponds to the tissue type of interest. The distinguishing parameters are then computed for other voxels in the medical image data set, and these voxels are classified on the basis of the values of their distinguishing parameters. The approach allows tissue types which differ only slightly to be distinguished according to a user's wishes.

Description

    BACKGROUND OF THE INVENTION
  • The invention relates to the setting of visualization parameter boundaries, such as color and opacity boundaries, for displaying images, in particular two-dimensional (2D) projections from three-dimensional (3D) data sets.
  • When displaying an image, such as in medical imaging applications, it is known to associate particular signal values with particular colors and opacities (known as visualization parameters) to assist visualization. This mapping is done when using data from a 3D data set (voxel data set) to compute a 2D data set (pixel data set) representing a 2D projection of the voxel data set for display on a computer screen or other conventional 2D display apparatus. This process is known as rendering.
  • The 2D data set is more amenable to user interpretation if different colors and opacities are allocated to different signal values in the 3D data set. The details of the mapping of signal values to colors and opacities are stored in a look-up table which is often referred to as the RGBA color table (R, G, B and A referring to red, green, blue and alpha (for opacity) respectively). The color table can be defined such that an entire color and opacity range is uniformly distributed between the minimum and maximum signal values in the voxel data set, as in a gray scale. Alternatively, the color table can be defined by attributing different discrete colors and opacities to different signal value ranges. In more sophisticated approaches, different sub-ranges are ascribed different colors (e.g. red) and the shade of the color is smoothly varied across each sub-range (e.g. crimson to scarlet).
  • When displaying data such as in medical imaging, the signal values comprising the data set do not usually correspond to what would normally be regarded as visual properties, such as color or intensity, but instead correspond to detected signal values from the measuring system used, such as computer-assisted tomography (CT) scanners, magnetic resonance (MR) scanners, ultrasound scanners and positron-emission-tomography (PET) systems. As an example, signal values from CT scanning will represent tissue opacity, i.e. X-ray attenuation. In order to improve the ease of interpretation of such images it is known to map different colors and opacities to different ranges of display value such that particular features, e.g. bone (which will generally have a relatively high opacity) can be more clearly distinguished from soft tissue (which will generally have a relatively low opacity).
  • When displaying a 2D projection of a 3D data set, in addition to attributing distinct ranges of color to voxels having particular signal value ranges, voxels within the 3D data set may also be selected for removal from the projected 2D image to reveal other more interesting features. The choice of which voxels are to be removed, or sculpted, from the projected image can also be based on the signal value associated with particular voxels. For example, those voxels having signal values which correspond to soft tissue can be sculpted, i.e. not rendered and therefore “invisible”, thereby revealing those voxels having signal values corresponding to bone which would otherwise be visually obscured by the soft tissue.
  • The determination of the most appropriate color table (known in the art as a preset) to apply to an image derived from a particular 3D data set is not trivial and is dependent on many features of the 3D data set. For example, the details of a suitable color table will depend on the subject, what type of data is being represented, whether (and if so, how) the data are calibrated and what particular features of the 3D data set the user might wish to highlight, which will depend on the clinical application. It can therefore be a difficult and laborious task to produce a displayed image that is clinically useful. Furthermore, there is inevitably an element of user-subjectivity in manually defining a color table and this can create difficulties in comparing and interpreting images created by different users, or even supposedly similar images created by a single user. In addition, the user will generally base the choice of color table on a specific 2D projection of the 3D data set rather than on characteristics of the overall 3D data set. A color table chosen for application to one particular projected image will not necessarily be appropriate to another projection of the same 3D data set. A color table which is objectively based on characteristics of the 3D data set rather than a single projection would be preferred.
  • Accordingly, there is a need in the art for a method of automatically determining appropriate color table presets when displaying medical image data.
  • SUMMARY OF THE INVENTION
  • According to the invention there is provided a method of setting visualization parameter boundaries for displaying an image from a 3D data set comprising a plurality of voxels, each with an associated signal value, comprising: selecting a volume of interest (VOI) within the 3D data set; generating a histogram of signal values from voxels that are within the VOI; applying a numerical analysis method to the histogram to determine a visualization threshold; and setting at least one of a plurality of boundaries for a visualization parameter according to the visualization threshold.
  • By restricting the histogram to voxels taken from the VOI, a numerical analysis method can be applied to the histogram which is sensitive to subtle variations in signal value and can reliably identify significant boundaries within the 3D data set for visualization. This allows the visualization parameter boundaries to be set automatically, which is especially useful for 3D data sets for which the signal values have no calibration, as is the case for MR scans.
  • In some embodiments, a first visualization parameter boundary is set at the visualization threshold. In other embodiments, first and second visualization parameter boundaries are set either side of the visualization threshold. This latter approach can be advantageous if an opacity curve interpolation algorithm is used to calculate an opacity curve between the visualization parameter boundaries.
  • The numerical analysis method may be applied once to determine only one visualization threshold. Remaining visualization parameter boundaries can then be set manually. Alternatively, the numerical analysis method can be applied iteratively to the histogram to determine a plurality of visualization thresholds and corresponding visualization parameter boundaries.
  • A significance test may be applied to visualization thresholds and, according to the outcome of the significance test, a significance marker can be ascribed for those ones of the voxels having signal values at or adjacent the visualization threshold, wherein the significance marker indicates significance or insignificance of the visualization threshold.
  • If two visualization parameter boundaries are set, one each side of the visualization threshold, and the visualization threshold is determined to be significant, then it is convenient to mark as significant only the voxels having signal values at one of the two visualization parameter boundaries. In one example, if a visualization threshold is calculated by the numerical analysis method to lie at a signal value of 54, and visualization parameter boundaries are set at 54±3, i.e. at 51 and 57, then the voxels with signal values of 57 can be marked as significant, and the voxels with signal values of 51 as insignificant.
  • The significance test can be used to distinguish between visualization parameter boundaries used as enhancements to visualizations of a single tissue type (known as cosmetic boundaries) and those used to identify different tissue-types for the purpose of segmentation (known as significant boundaries). Accordingly, the method may further comprise applying a selection tool to the 3D data set, wherein the selection tool is sensitive to the significance markers. One or more of the selection tools can be designed to ignore voxels that have been marked as insignificant.
  • The rate of change of a visualization parameter across a visualization parameter boundary may also be modified based on the significance of the visualization parameter boundary. A sharpness parameter can be calculated for determining what rate of change of the visualization parameter to apply at a boundary.
  • In some embodiments of the invention, the sharpness parameter is the same as the significance marker. The sharpness need not simply be a binary operand, but can adopt a range of integer values, for example from 0 to 100. A sharpness of zero indicates an insignificant boundary, which is referred to as a cosmetic boundary in view of its irrelevance to selection tools. A sharpness of 100 indicates a boundary that has the maximum degree of significance. Intermediate values are used to indicate intermediate significance. In addition to affecting the blending of visualization parameters, the non-zero values may be used for filtering by the selection tools so that boundaries with a significance value of, for example, 5 are significant to some but not all selection tools, a boundary with a significance value of 50 is significant for a greater subset of the selection tools, and a boundary with the maximum significance value of 100 is significant to all selection tools. Alternatively, the non-zero significance values may be used by selection tools to resolve conflicts between different marked boundaries, with boundaries having higher significance values taking precedence. Examples of selection tools are tools for marking objects in a set of connected or unconnected voxels with a visualization parameter (e.g. color or opacity) between two significant visualization parameter boundaries, multiple groups of connected or unconnected voxels above a significant boundary or multiple bands of connected or unconnected voxels below a significant boundary. Marked voxels could then, for example, be sculpted. Sculpting is a well known term of art used to describe voxels that are marked to be transparent from view irrespective of their signal values.
  • In the best mode of the invention, the numerical analysis method comprises: forming a convex hull of a plurality of segments around the histogram; determining which perpendicular from the segments to the histogram has the greatest length; and taking the signal value at the intersection between the histogram and the perpendicular as the visualization threshold. The sharpness value and the significance test can then be based on the length of the perpendicular determined to have the greatest length. For example, the visualization threshold can be determined to be insignificant if the ratio of the length of the perpendicular to a parameter derived from the signal value range and/or the frequency range of the histogram is below a minimum score.
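The following sketch illustrates one way this hull-based analysis could be realized for a one-dimensional histogram (for example of log-scaled counts). It is an illustrative reading of the description above rather than the production algorithm; the function name and return convention are assumptions.

```python
import numpy as np

def hull_threshold(hist):
    """Return (bin_index, depth) of the histogram bin lying deepest below
    the upper convex hull; depth can feed the significance test and the
    sharpness value described above."""
    y = np.asarray(hist, dtype=float)
    # Upper convex hull of the points (i, y[i]) via a monotone-chain scan
    hull = []
    for i in range(len(y)):
        while len(hull) >= 2:
            i0, i1 = hull[-2], hull[-1]
            # Drop i1 if it does not lie strictly above the chord i0 -> i
            if (y[i1] - y[i0]) * (i - i0) <= (y[i] - y[i0]) * (i1 - i0):
                hull.pop()
            else:
                break
        hull.append(i)
    best, best_depth = None, 0.0
    for a, b in zip(hull[:-1], hull[1:]):
        dx, dy = b - a, y[b] - y[a]
        seg_len = (dx * dx + dy * dy) ** 0.5
        for i in range(a + 1, b):
            # Perpendicular distance from (i, y[i]) to the segment a -> b
            depth = abs(dx * (y[i] - y[a]) - dy * (i - a)) / seg_len
            if depth > best_depth:
                best, best_depth = i, depth
    return best, best_depth
```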
  • For some automatic presets, the numerical analysis method is applied to the histogram within a predetermined restricted range of signal values to search for a visualization threshold within that restricted range. This will be particularly useful for 3D data sets with calibrated signal values, such as X-ray data sets calibrated in Hounsfield units. Accordingly, the restricted range may be defined in terms of Hounsfield units.
  • To provide the user with information about the nature of the automatically calculated thresholds, the histogram and its visualization parameter boundaries can be displayed to the user together with the image created from the 3D data set, thus making the user aware of the visualization parameter boundaries determined by the automatic preset.
  • The method of the invention is particularly powerful in that it can take account of sculpting performed on the 3D data set prior to automatic preset determination according to the invention. A common example of sculpting will be when a plane is defined through a 3D data set and all voxels to one side of the plane are not rendered, irrespective of their signal values. Another example of sculpting will be the removal of a given set of connected voxels with signal values in a specified range, thus restricting the range of signal values to be visualized prior to determining an automatic preset. Sculpting can be taken account of by restricting the histogram to unsculpted voxels in the VOI.
  • It has been recognized that voxels with the highest and lowest signal values often constitute bad data which can skew the results of the numerical analysis of the histogram. Accordingly, it is preferred that voxels with the highest and/or the lowest signal values are excluded from the numerical analysis method. For example, the voxels with the lowest and highest 0.1% of the signal values can be excluded. Other proportions could also be envisaged.
  • In some implementations the method may operate interactively. In such cases, if a user re-defines the VOI, the method of setting visualization parameter boundaries is automatically reapplied to continuously provide the most appropriate visualization parameter boundaries.
  • The invention further provides a computer program product bearing computer readable instructions for performing the method of the invention.
  • The invention also provides a computer apparatus loaded with computer readable instructions for performing the method of the invention.
  • According to a further aspect of the invention there is provided a method of numerically processing a medical image data set comprising voxels, the method comprising: receiving user input to positively and negatively select voxels that are and are not of a tissue type of interest; determining a distinguishing function that discriminates between the positively and negatively selected voxels on the basis of one or more characterizing parameters of the voxels; and classifying further voxels in the medical image data set on the basis of the distinguishing function. This method thus applies supervised pattern recognition to classify the voxels.
  • By receiving input in response to a user specifying both positive examples of voxels (i.e. those which do correspond to the tissue type of interest) and negative examples of voxels (i.e. those which do not correspond to the tissue type of interest), the method is able to objectively classify further voxels in the data set. Because of this, the method provides an easy and intuitive technique for allowing users to select regions of interest for further examination or removal from the data set.
  • The method may include presenting a representative (2D) image derived from the (3D) medical image data set to a user, such as a sagittal, coronal or transverse section view, whereby the user selects voxels by positioning a pointer at appropriate locations in the example image. An example voxel may then be taken to be a voxel whose coordinates in the medical image data set map to the location of the pointer in the example image. Alternatively, for a single positioning of the pointer, a number of example voxels may be selected, for example those in a region surrounding a voxel whose coordinates in the data set map to the location of the pointer in the example image may be taken as being selected. Selecting multiple voxels with a single positioning of the cursor allows for a more statistically significant sample of example voxels to be provided with little additional user input.
  • At least one of the one or more characterizing parameters of a voxel may be a function of surrounding voxels. For example, a local average, a local standard deviation, gradient magnitude, Laplacian, minimum value, maximum value or any other parameterization may be used. This allows voxels to be classified on the basis of characteristics of their surroundings, rather than simply on the basis of their voxel value. This means that similar tissue types can be properly classified more accurately than with conventional classification methods based on voxel value alone. This is because subtle difference in “texture” in the vicinity of a voxel can help to distinguish it from other voxels having otherwise similar voxel values. It is also noted that for some modalities such as MR there may be multiple voxel values, such as T1 and T2 in multi-spectral MR, which could each be used to define a separate characterizing parameter. These could be used collectively in combination to set the distinguishing function.
  • Moreover, the user input may additionally include clinical information, such as specification of tissue type or anatomical feature, regarding either the positively or negatively selected voxels, or both. Following this user input, the distinguishing function can then be determined from the characterizing parameters having regard to the clinical information input by the user.
  • Once the voxels have been classified, an image of the data set may be rendered which takes account of the classification of voxels. The rendered image may then be displayed to the user. For example, the positively selected voxels may be tinted with a color in a monochrome gray scale rendering.
  • In some examples, a binary classification may be used whereby voxels are classified as either corresponding to the tissue type of interest or not corresponding to the tissue type of interest. In these cases, voxels classified as not corresponding to the tissue type of interest may be rendered as transparent or semi-transparent in a displayed image. The general practice of rendering features that are not of interest as semi-transparent is sometimes referred to as “dimming” in the art. Alternatively, voxels which are classified as corresponding to the tissue type of interest may be rendered as transparent, or voxels classified as corresponding to the tissue type of interest may be rendered to be displayed in one range of displayable colors and voxels classified as not corresponding to the tissue type of interest being rendered to be displayed in another range of displayable colors.
  • An image based on rendering a volume data set representing the value of the distinguishing function of the voxels can also be made.
  • In other examples, rather than using a binary classification, voxels may be classified according to a calculated probability that they correspond to the tissue type of interest. In these cases, an image may be generated by rendering of a volume data set representing the probability that the voxels correspond to the tissue type of interest, rather than rendering based on voxel values themselves. For example, the probability can be mapped onto opacity of the rendered material instead of taking a threshold. Another approach would be to render as transparent any voxels having a probability of corresponding to the tissue type of interest of less than a certain value.
  • Where the classification provides an estimated probability for each voxel, the probabilities per voxel may themselves be considered as voxel values in a medical image data set which may be re-classified in a subsequent iteration of the method. This implements a form of relaxation labeling.
  • Further, it will be appreciated that the user input can be prompted by displaying an image to a user from a 3D data set comprising a plurality of voxels, each with an associated signal value, for example by selecting a volume of interest (VOI) within the 3D data set; generating a histogram of signal values from voxels that are within the VOI; applying a numerical analysis method to the histogram to determine a visualization threshold; and setting at least one of a plurality of boundaries for a visualization parameter according to the visualization threshold.
  • According to a further aspect of the invention there is provided an apparatus for numerically processing a medical image data set comprising voxels, the apparatus comprising: storage from which a medical image data set may be retrieved; a user input device configured to receive user input to positively and negatively select voxels that are and are not of a tissue type of interest; and a processor configured to determine a distinguishing function that discriminates between the positively and negatively selected voxels on the basis of one or more characterizing parameters of the voxels; and to classify further voxels in the medical image data set on the basis of the distinguishing function.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.
  • For a better understanding of the invention and to show how the same may be carried into effect reference is now made by way of example to the accompanying drawings in which:
• FIG. 1 shows a generic computed tomography scanner for generating a 3D data set;
  • FIG. 2 a shows a 2D projection of a 3D data set with tissue opacity values being represented by a linear gray-scale;
  • FIG. 2 b schematically shows a graphical representation of the color and opacity curve mappings used in generating the 2D image shown in FIG. 2 a;
  • FIG. 2 c shows a 2D projection of a 3D data set with ranges of tissue opacity values being represented by ranges of a gray-scale defined by presets;
  • FIG. 2 d schematically shows a graphical representation of the color and opacity curve mappings used in generating the 2D image shown in FIG. 2 c;
  • FIG. 2 e shows a 2D projection of a 3D data set with ranges of tissue opacity values being represented by ranges of colors defined by presets;
  • FIG. 3 shows a histogram of data values within a volume of interest (VOI) within a 3D data set;
  • FIG. 4 shows a flow chart of an automatic preset determination method according to an embodiment of the invention;
  • FIG. 5 a shows a histogram of data values within a VOI within a 3D data set and to which a first convex hull has been applied to determine a first visualization threshold;
  • FIG. 5 b shows a histogram of data values within a VOI within a 3D data set and to which a second convex hull has been applied to determine a second visualization threshold;
  • FIG. 5 c shows a histogram of data values within a VOI within a 3D data set and to which a third convex hull has been applied to determine a third visualization threshold;
  • FIG. 5 d shows a histogram of data values within a VOI within a 3D data set and to which a fourth convex hull has been applied to determine a fourth visualization threshold;
  • FIG. 6 shows a computer system for storing, processing and displaying medical image data;
  • FIG. 7 a shows a visualization state tool loaded with a VOI from a 3D data set for which color boundaries have been determined according to an automatic preset according to a first example of the invention, referred to as “Active MR”;
  • FIG. 7 b shows an example image displayed according to the automatic preset of FIG. 7 a;
• FIG. 8 a shows a visualization state tool loaded with a VOI from a 3D data set for which color boundaries have been determined according to an automatic preset according to a third example of the invention, referred to as “Active Bone (CT)”;
  • FIG. 8 b shows an example image displayed according to the automatic preset of FIG. 8 a;
• FIG. 9 a shows a visualization state tool loaded with a VOI from a 3D data set for which color boundaries have been determined according to an automatic preset according to a fourth example of the invention, referred to as “Active Angio (CT)”;
  • FIG. 9 b shows an example image displayed according to the automatic preset of FIG. 9 a;
  • FIG. 10 schematically shows an example display of an image and associated section views which a user may employ to identify a tissue type of interest;
  • FIG. 11 is a flow chart schematically showing a method for classifying whether voxels in a volume data set belong to a tissue type of interest according to an embodiment of the invention; and
  • FIGS. 12A-12D schematically show the distribution of a number of different characterizing parameters computed for example voxels identified by a user as belonging to different tissue types.
  • DETAILED DESCRIPTION
• FIG. 1 is a schematic perspective view of a generic CT scanner 2 for obtaining a 3D scan of a region of a patient 4. An anatomical feature of interest (in this case a head) is placed within a circular opening 6 of the CT scanner 2 and a series of X-ray exposures is taken. Raw image data is derived from the CT scanner and could comprise a collection of one hundred 2D 512×512 data subsets, for example. These data subsets, each representing an X-ray image of the region of the patient being studied, are subject to image processing in accordance with known techniques to produce a 3D representation of the feature imaged such that various user-selected 2D projections of the 3D representation can be displayed (typically on a computer monitor). The techniques for generating such 3D representations of structures from collections of 2D data subsets are known and will not be described further herein.
• FIGS. 2 a, 2 c and 2 e show example 2D images of the same projection from a 3D CT data set but with different presets. FIGS. 2 b and 2 d show graphical representations of the color and opacity curve mappings used in generating the 2D images shown in FIGS. 2 a and 2 c respectively. FIGS. 2 a-e are included to illustrate the effect of presets on such images before describing how presets are implemented in specific embodiments of the invention.
  • FIG. 2 a shows an example 2D image which is a projection of a 3D data set obtained from a CT scanner. A VOI within the 3D data set has been selected for display. The material surrounding the VOI is not rendered in the projection. The image is displayed with a uniform gray-scale ranging from black to white.
  • FIG. 2 b schematically shows a graphical representation of the color and opacity curve mappings used in generating the 2D image shown in FIG. 2 a. FIG. 2 b includes a plot of a binned frequency distribution of the signal values in the VOI with a superposed line plot of opacity as a function of signal value (red curve). The shading of the area under the binned frequency distribution plot indicates the mapping of colors to signal value in the rendering.
  • There are three tissue types in the projected image. These are a region of bone 26, a region of soft tissue 30 and a barely visible network of blood vessels 28. The high X-ray stopping power of bone, compared to that of blood and soft tissue, makes the region of bone 26 easily identifiable in the image due to the high opacity associated with the associated voxels. However, since the opacities of blood and soft tissue are more similar, they are not as clearly distinguished. In particular, it is difficult to see the blood vessel network.
  • FIG. 2 c shows a 2D image of the same projection as FIG. 2 a, but rendered with a different color and opacity mapping, i.e. a different preset.
  • FIG. 2 d schematically shows a graphical representation of the color and opacity curve mappings used in generating the 2D image shown in FIG. 2 c. FIG. 2 d includes a plot of a binned frequency distribution of the signal values in the VOI and a superposed line plot of opacity as a function of signal value (red curve). The shading of the area under the binned frequency distribution plot again indicates the mapping of colors to signal values in the rendering.
  • In FIG. 2 c, signal values indicative of soft tissue voxels have been colored significantly differently from signal values indicative of blood vessel voxels. This makes the interpretation of the blood vessels much clearer. In practice, however, a wider range of colors will be available than can be reliably shown in a black-and-white figure such as FIG. 2 c, and the three tissue types could be shown in distinctly different colors.
• FIG. 2 e shows a 2D image of the same projection as FIGS. 2 a and 2 c, but rendered with a different color and opacity mapping, i.e. a different preset. FIG. 2 e shows the signal values of the voxels within the VOI (and hence the corresponding portions in the projected image) as different colors. Voxels associated with the blood vessel network are shaded yellow, those of the soft tissue are allocated shades of transparent red and those of the bone are allocated shades of cream.
• The range of displayable colors and opacities to which the voxel signal values are to be mapped (i.e. the entries in the color table) will in general depend on the specific application. For example, in one application images might be represented using five color ranges with the color ranges being black, red, orange, yellow and white. The opacity level from black to white might range from 0% to 100% opacity with the colors mixing in relation to the opacity curve. This would allow for a smooth transition between bands, with the color and opacity values at the upper edge or boundary of one range matching the color and opacity values at the lower edge or boundary of the adjacent range. In this way, the five ranges blend together at their boundaries to form a smoothly varying and continuous spectrum. The colors at the bottom boundary of the first range and the top boundary of the fifth range are black and white respectively.
  • Each of these five color ranges can be mapped to the voxel signal values which represent different tissue types to distinctly identify the different tissues. For example, bone might be represented in shades of cream, blood in shades of yellow, a kidney in shades of orange and so on, all with varying opacities. The task of attributing color and opacity to different tissue types becomes one of determining suitable signal values, hereafter referred to as visualization thresholds, for defining boundaries between the ranges.
  • Sub-ranges of signal values between color boundaries may be taken to represent different tissue types. The shades of color available within the color range associated with each tissue type can then be appropriately mapped to the sub-range of signal values. For instance, in one example there are 32 shades of cream available for coloring bone, and bone is associated with signal values between 154 and 217 (in arbitrary units). A color look-up table could then associate the first shade of cream with signal values 154 and 155, the second shade of cream with signal values 156 and 157, the third with 158 and 159 and so on through to associating the thirty-second shade of cream with signal values 216 and 217.
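• By way of illustration only, such a look-up table might be sketched as follows (Python with NumPy is assumed; the function name, the shade values and the even two-values-per-shade spacing are illustrative, not part of the described method):

```python
import numpy as np

def build_lut(lo, hi, shades):
    """Map each signal value in [lo, hi] to one of len(shades) shades.

    Illustrative sketch: values are spread evenly across the shades, so
    with 32 shades over 154..217 (64 values) each shade covers two
    consecutive signal values, as in the example in the text.
    """
    values = np.arange(lo, hi + 1)
    # Index of the shade for each signal value in the sub-range.
    idx = ((values - lo) * len(shades)) // (hi - lo + 1)
    return {int(v): shades[int(i)] for v, i in zip(values, idx)}

# 32 hypothetical RGB shades of cream, darkest to lightest.
cream = [(245, 240 - 2 * k, 200 - k) for k in range(32)]
lut = build_lut(154, 217, cream)
assert lut[154] == lut[155] == cream[0]   # first shade covers 154-155
assert lut[216] == lut[217] == cream[31]  # last shade covers 216-217
```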
  • FIG. 3 is a histogram which schematically shows the binned frequency distribution F of an example set of voxels within a selected VOI as a function of signal value D. The signal values D may be in arbitrary units (typical for MR) or calibrated units (such as Hounsfield units (HU) that are used for CT and other types of X-ray imaging). The histogram shown in FIG. 3 represents the voxels within a VOI from which a 2D projected image (such as those shown in FIGS. 2 a, 2 c and 2 e) can be derived and the signal values represent X-ray attenuation calibrated in HUs.
  • The signal values D of the voxels within the selected VOI are distributed between a minimum value S and a maximum value E. Within this overall range of signal values, four distinct voxel value sub-ranges are evident. A first voxel value sub-range I has a narrow peak at relatively high signal values, a second voxel value sub-range II has a relatively broad peak, a third voxel value sub-range III has a shoulder on the lower signal value side of the second voxel value sub-range II and a fourth voxel value sub-range IV has a shoulder on the lower signal value side of the third voxel value sub-range III.
  • The different voxel value sub-ranges I-IV identified in the histogram are likely to relate to different tissue types in the 3D data set and so would benefit from being displayed with different color ranges. For example, one might reasonably infer that sub-range I of high X-ray attenuation corresponds to bone, sub-range II corresponds to blood, sub-range III corresponds to soft tissue and sub-range IV represents the background tissue type or air.
  • It is not necessary when displaying the images to pre-associate voxel value sub-ranges in the histogram with particular tissue types. In fact, if the signal values in the data set are un-calibrated it may not even be possible to do so from the signal values alone. Nonetheless, if distinct voxel value sub-ranges in the histogram can be identified by a numerical analysis, derived images can be shown with different tissue types clearly and consistently displayed without the need for user-driven post-display processing.
• FIG. 4 is a flow chart showing the steps involved in determining color boundaries for allocating displayable colors based on a numerical analysis of signal values. In a first step 31, a 3D data set of signal values is provided. The 3D data set in this example is provided by an MR scanner and is calibrated in arbitrary units. However, any 3D data set could equally well be used. In a next step 32, a VOI within the data set is selected. This step of selecting a VOI within the data set may be performed manually or automatically, for instance using a connectivity algorithm. In a next step 33, a histogram of the signal values of the voxels within the VOI is generated, such as the one shown in FIG. 3, for example. In a next step 34, a set of extreme signal values is identified. Exclusion of this set of extreme signal values from subsequent steps of the determination of color boundaries helps to avoid any undesirable skewing of the results by signal values which are not considered statistically significant. Such extreme values might be caused by highly attenuating medical implants (such as screws) or defects in the image data set, for example. By ignoring a fraction of the highest and lowest signal values, such as the extreme 0.1% of voxels at each end of the range, any of these extreme outlier voxels within the VOI will not unduly skew the results of the numerical analysis. However, if there are no extreme outlier voxels, the numerical analysis will not be unduly affected by ignoring a relatively small fraction of the histogram. After excluding the extreme data, the histogram is effectively considered to run between signal values L and U, where signal values between S and L, and between U and E, are those which are excluded. Whereas in this example a default fraction of extreme voxels is excluded, the fraction discarded could also depend on characteristics of the data set and/or a subject being studied.
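• A minimal sketch of steps 33 and 34 (histogram generation and exclusion of extreme values) might look as follows; Python with NumPy is assumed, and the names, the bin count and the use of percentiles to realize the 0.1% exclusion are illustrative assumptions:

```python
import numpy as np

def histogram_with_tails_excluded(voxels, n_bins=256, tail_fraction=0.001):
    """Bin the VOI's signal values and find the working range [L, U].

    Sketch only: the histogram is built after discarding the extreme
    0.1% of voxels at each end of the range, so that outliers (such as
    implants or defects) cannot skew the later analysis.
    """
    data = np.asarray(voxels).ravel()
    # L and U correspond to the 0.1 and 99.9 percentiles of the data.
    L, U = np.percentile(data, [100 * tail_fraction,
                                100 * (1 - tail_fraction)])
    inside = data[(data >= L) & (data <= U)]
    counts, edges = np.histogram(inside, bins=n_bins, range=(L, U))
    return counts, edges, L, U
```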
• In a next step 35, an iteration parameter n is given an initial value of 1. The iteration parameter is indicative of how many visualization thresholds for determining visualization parameters have already been determined; at this stage of the flow chart, a value of n means that n−1 visualization thresholds have previously been found. In a next step 36, an nth convex hull of the histogram is determined. The nth convex hull is defined to be a curve spanning the signal value range L to U and which comprises that series of line segments which combine to form the shortest possible single curve drawn between the histogram values at L and U, subject to the condition that at any given signal value within the range the curve must be greater than or equal to the value of the histogram, and further subject to the condition that the nth convex hull must also meet the histogram profile at all previously identified color boundaries.
• FIG. 5 a shows the histogram previously shown in FIG. 3 but on which the first (i.e. nth where n=1) convex hull spanning the histogram has been drawn (since n=1, there are no previously identified color boundaries at which the first convex hull must meet the histogram). It can be seen that the first convex hull contains two extended straight line portions marked b and a. These are the distinct and continuous sections of the first convex hull which deviate from following the histogram profile by “cutting corners” and thereby minimizing the integrated length of the convex hull.
  • In a next step 37, an nth visualization threshold Tn is found by determining the point on the histogram profile for which the nth convex hull has the maximum nearest distance. This is the point on the histogram from which the longest possible line can be drawn to meet the nth convex hull perpendicularly. This point, which must intersect the nth convex hull on one of its extended straight line portions, can be determined, to a finite accuracy, by a number of known techniques. The longest perpendicular which can be drawn between the first convex hull and the histogram profile shown in FIG. 5 a connects to the straight line portion marked b and is indicated in the figure by the dotted-line marked c. This line intersects the histogram profile at signal value T1 and so defines a first visualization threshold of T1. The first visualization threshold T1 divides the histogram into two ranges, one running between signal values L and T1 and one running between signal values T1 and U. It is apparent from FIG. 5 a that the range between signal values T1 and U corresponds closely with the first voxel value sub-range I identified in the histogram indicated in FIG. 3.
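• The first iteration of steps 36 and 37 might be sketched as follows (Python with NumPy; names are illustrative). For later iterations (n>1), the requirement that the nth convex hull meet the histogram at previously identified boundaries is, in effect, equivalent to repeating this analysis separately on each sub-interval between the thresholds already found:

```python
import numpy as np

def upper_convex_hull(x, y):
    """Indices of the upper convex hull of points (x, y), left to right
    (monotone-chain construction)."""
    hull = []
    for i in range(len(x)):
        while len(hull) >= 2:
            o, a = hull[-2], hull[-1]
            # Pop the last point if it lies on or below the chord o->i.
            if (x[a] - x[o]) * (y[i] - y[o]) - (y[a] - y[o]) * (x[i] - x[o]) >= 0:
                hull.pop()
            else:
                break
        hull.append(i)
    return hull

def find_threshold(signal, counts):
    """Sketch of steps 36-37 for the first iteration: build the hull,
    then return the signal value whose histogram point lies furthest
    (perpendicularly) below it. Both axes are normalized to [0, 1]
    first, an assumed normalization, so the frequency scale does not
    dominate the distance."""
    x = np.asarray(signal, float)
    y = np.asarray(counts, float)
    x = (x - x.min()) / np.ptp(x)
    y = y / y.max()
    hull = upper_convex_hull(x, y)
    best_d, best_i = 0.0, hull[0]
    for a, b in zip(hull[:-1], hull[1:]):
        seg = np.hypot(x[b] - x[a], y[b] - y[a])
        for i in range(a + 1, b):
            # Perpendicular distance from histogram point i to the segment.
            d = abs((x[b] - x[a]) * (y[a] - y[i])
                    - (x[a] - x[i]) * (y[b] - y[a])) / seg
            if d > best_d:
                best_d, best_i = d, i
    return signal[best_i], best_d
```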
  • In a next step 38 shown in FIG. 4, a significance parameter for the nth visualization threshold Tn is determined. The determination of the significance parameter is discussed further below.
  • In a next step 39, one or more color boundaries are set based on the nth visualization threshold Tn. In this example, a single color boundary threshold is set at the data value matching the visualization threshold Tn. In other examples, depending on clinical application, and/or the requirements of subsequent visualization tools, it may be preferable to associate two color boundaries with a single visualization threshold. For instance, by setting first and second color boundaries at signal values slightly displaced to the lower and higher signal value side of the signal value of an nth visualization threshold, for example at data values Tn+/−3 in arbitrary units, a rapid change in colors and opacities allotted to signal values in the vicinity of the visualization threshold occurs which can help to highlight features of the boundary in a subsequently displayed 2D image.
  • In a next step 40, a sharpness value is set for each of the one or more color boundaries associated with the nth visualization threshold Tn. The sharpness value may be based on the significance parameter of nth visualization threshold Tn and can be used to assist in displaying images. The sharpness value may, for example, range from 0 to 100 and be used to determine a level of color blending between the colors near to a color boundary. Increased color blending can be set to occur at boundaries with relatively low sharpness values. This ensures that low significance boundaries appear less harsh in a displayed image. Conversely, little or no color blending is applied to boundaries with relatively high sharpness values. This ensures high significance boundaries appear well defined in a displayed image. If multiple color boundaries are set according to a single visualization threshold, it may be convenient to associate a sharpness value based on the significance parameter of the visualization threshold with only a single one of the multiple color boundaries, and to set a fixed sharpness value for the other of the multiple color boundaries. Since the sharpness values are based on the significance parameter, sharpness values can also be taken as a measure of the significance of a visualization threshold and associated boundaries. The significance of a boundary may play a role in further processing as discussed further below.
  • In a next step 41, a test is performed to determine whether additional color boundaries are required. The iteration parameter n, which at this stage of the flow chart indicates how many visualization thresholds have been determined, is compared with the total number (Ntotal) of visualization thresholds required. Ntotal will depend on the number of displayable color ranges and how many color boundaries have been set for each of the visualization thresholds. For example, in this case, where all visualization thresholds provide a single color boundary, and if there are five displayable color ranges (hence four color boundaries), four visualization thresholds are required and Ntotal=4. However, in another example, a particular application might require a color boundary to be set either side of the first visualization threshold and single color boundaries to be set at each subsequent visualization threshold. In such a case the four color boundaries associated with five displayable color ranges would be set after determining only three visualization thresholds since the first visualization threshold sets two color boundaries, accordingly Ntotal=3.
• If it is determined that no further color boundaries are required (i.e. n=Ntotal), the flow chart follows the N-branch from step 41 and the preset determination is complete. If further color boundaries are required (i.e. n<Ntotal), the flow chart follows the Y-branch from step 41 to a step 42 where the iteration parameter n is incremented, and then returns to step 36 to continue as described above.
• FIG. 5 b shows the histogram previously shown in FIG. 3, but on which the data value T1 of the first visualization threshold (and hence in this example also a first color boundary) and the second (i.e. nth where n=2) convex hull are marked (i.e. showing the results of step 36 in the flow chart shown in FIG. 4 during the n=2 iteration). The second convex hull contains four extended straight line portions which are marked f, e, d and a in the figure. In the n=2 iteration of step 37, the point on the histogram from which the longest possible line can be drawn to meet the second convex hull perpendicularly is determined. It is noted that the geometry of the extended line portion marked a in both FIGS. 5 a and 5 b is not affected by the additional condition applied to the second convex hull (i.e. that it pass through the histogram value at T1). Accordingly it is not necessary to re-calculate the perpendicular distances between this section of the convex hull and the histogram profile since those previously determined may be relied upon.
  • The longest perpendicular which can be drawn between the second convex hull and the histogram profile shown in FIG. 5 b connects to the straight line portion marked f and is indicated in the figure by the dotted-line marked g. This line intersects the histogram profile at signal value T2 which combines with T1 to divide the histogram into three ranges, one running between signal values L and T2, one running between signal values T2 and T1 and, as before, one running between signal values T1 and U. It is apparent from FIG. 5 b that the range between signal values T2 and T1 corresponds closely with the second voxel value sub-range II identified in the histogram indicated in FIG. 3. The setting of color boundaries (and their associated sharpness) which are linked to the second visualization threshold T2 will be understood from the above.
• FIG. 5 c again shows the histogram previously shown in FIG. 3 but with the third (i.e. nth where n=3) convex hull associated with determining a third visualization threshold T3 also shown. The convex hull in FIG. 5 c passes through the histogram values at both T1 and T2 as well as the lower and upper end-points L and U and contains five extended straight line portions marked j, h, e, d, and a. The longest perpendicular which can be drawn between the third convex hull and the histogram profile connects to the straight line portion marked j and is indicated in the figure by the dotted-line marked k. This line intersects the histogram profile at signal value T3 which combines with T1 and T2 to divide the histogram into four ranges, one running between signal values L and T3, one running between signal values T3 and T2, one running between signal values T2 and T1 and one running between signal values T1 and U. It is apparent from FIG. 5 c that the ranges between signal values L and T3, and T3 and T2, correspond closely with the fourth and third voxel value sub-ranges IV, III respectively identified in the histogram indicated in FIG. 3.
  • As noted above, in the example shown in FIG. 4 the iterative determination of visualization thresholds continues until a pre-set maximum number of associated boundaries have been determined, e.g. if there are five color ranges available for display then four color boundaries are required to define the associated five voxel value sub-ranges within the histogram. When the requisite number of boundaries are determined, the color mappings for displaying images derived from the VOI on which the histogram analysis has been performed can be generated by associating the shades available within each color range with the signal values defined by the color boundaries.
  • If there are P available color ranges for display and the VOI contains P or more than P distinct tissue types, the method outlined above automatically identifies P voxel value sub-ranges from the histogram of the voxels contained in the VOI. For a given type of data, an appropriate value of P (and hence number of visualization thresholds to be identified) can be selected based on the expected characteristics of the data set.
  • If, on the other hand, the VOI contains fewer than P distinct tissue types, the allotment of P color ranges will cause more than one color range to be allotted to at least one of the tissue types. A color boundary which is placed within a signal value range representing a single tissue type may appear confusing in the display, especially if the user is unaware of it. As noted above, color blending based on a sharpness value derived from the significance parameter for each visualization threshold can be used to de-emphasize such boundaries.
  • In addition to setting the sharpness value, the significance parameter may also form the basis of a significance test for determining the significance of a visualization threshold. The significance parameter may, for example, derive from the length of the determined longest perpendicular between the nth convex hull and the histogram. The significance test may require that this longest perpendicular is at least a pre-defined fraction of the histogram's characteristic dimensions. For instance, the significance test may require that the longest perpendicular be at least 5% of the geometric height or width of the histogram. In another example, the significance test may require that the longest perpendicular be at least 10% of the value of the appropriately normalized height and width of the histogram added in quadrature. The height and width may be differently normalized to provide different weighting. Whilst a default fraction, such as 5% or 10%, may be used, the fraction may also be changed to better suit a particular application and expected histogram characteristics. A determined visualization threshold which lies within a signal value range representing a single tissue type will fail an appropriately configured significance test. Boundaries associated with these visualization thresholds will be noted as cosmetic boundaries and will be ignored by selection tools.
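• A sketch of such a significance test, using the quadrature form mentioned above, might read as follows (the equal weighting of height and width is an assumption; as noted, the two may in practice be normalized differently):

```python
import numpy as np

def is_significant(perp_length, hist_width, hist_height, fraction=0.10):
    """Significance test sketch: the longest perpendicular between the
    nth convex hull and the histogram must be at least `fraction` of
    the histogram's width and height added in quadrature. Width and
    height are assumed already normalized to comparable scales."""
    return perp_length >= fraction * np.hypot(hist_width, hist_height)
```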
  • FIG. 5 d shows the histogram previously shown in FIG. 3, but on which the fourth (i.e. nth where n=4) convex hull associated with attempting to find a fourth visualization threshold T4 is shown. The fourth convex hull meets the histogram at signal values T1, T2 and T3. As noted above, the three previously identified visualization thresholds, and hence associated color boundaries, define four voxel value sub-ranges in the histogram. As further noted above, the histogram example shown in FIG. 3 corresponds to a VOI containing four distinct tissue types and accordingly there are no more significant visualization thresholds to be identified. This is reflected by the fact that the longest perpendicular which can be drawn between the convex hull and the histogram shown in FIG. 5 d (marked by the line l) is relatively small compared with the lines c, g and k used to define the visualization thresholds T1, T2 and T3 and shown in FIGS. 5 a, 5 b and 5 c respectively. The visualization threshold T4 defined by the line l shown in FIG. 5 d is thus not significant and would fail an appropriately configured significance test.
• Where a visualization threshold is deemed not to be significant, for example with reference to a model of a histogram's expected characteristics or by comparison with a pre-determined minimum length of longest perpendicular, it can either be noted as such to provide a cosmetic color boundary in the color table (and given an appropriate sharpness value of, for example, 0), or it can be discarded. A cosmetic boundary is a boundary which is defined in the color table and thus relevant for display purposes, but which is ignored by other tools in the graphics system which are boundary sensitive, such as tools used for selecting objects that contain algorithms that automatically search for and mark boundaries. One or more cosmetic boundaries may be determined iteratively in the manner described above until a sufficient number is determined to satisfy the requirements of the number of available displayable color ranges (i.e. j−1 color boundaries for j displayable color ranges).
• If color boundaries associated with visualization thresholds which fail the significance test are to be discarded, rather than kept but marked as cosmetic, the iterative search for further visualization thresholds may cease after the first significance-test-failing visualization threshold is identified. In these circumstances there will be fewer identified voxel value sub-ranges in the histogram (which nominally correspond to fewer distinct tissue types within the VOI) than there are displayable color ranges. Individual color ranges may be allotted to the individual signal value ranges defined by the identified color boundaries and the remaining color ranges left unused, or cosmetic boundaries can be artificially defined. For example, a cosmetic boundary could be generated by defining a color boundary mid-way between the two signal values of the most widely separated significant color boundaries. If multiple cosmetic boundaries are to be defined, the signal values with which to associate them can be determined serially, i.e. one after another using the above criterion, or in parallel such that they collectively divide the widest signal value range into equal sections.
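• The serial variant of this cosmetic boundary placement might be sketched as follows (names are illustrative; attaching the zero sharpness value to each added boundary would be handled elsewhere, and including the range ends lo and hi among the gap endpoints is an assumption):

```python
def fill_cosmetic_boundaries(significant, n_required, lo, hi):
    """Add 'cosmetic' boundaries mid-way in the widest remaining gap,
    one after another, until n_required boundaries exist."""
    bounds = sorted(significant)
    while len(bounds) < n_required:
        pts = [lo] + bounds + [hi]
        gaps = [(b - a, a, b) for a, b in zip(pts[:-1], pts[1:])]
        _, a, b = max(gaps)              # widest gap
        bounds.append((a + b) / 2.0)     # place boundary at its midpoint
        bounds.sort()
    return bounds
```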
• The significance of a given color boundary may be a continuous parameter and need not be purely binary, e.g. a color boundary need not simply be significant or non-significant (i.e. non-cosmetic or cosmetic). As noted above, sharpness values derived from a significance parameter may be used as a direct measure of a boundary's significance, such that in many cases the sharpness value itself will be used to directly indicate a boundary's significance. Accordingly, significance may be given an insignificant level (e.g. 0) and one or more levels of significance (e.g. integers between 1 and 100). This facility may be used by other graphics tools within the image processing system, for example to determine the probability of a boundary being significant when attempting to resolve conflicts when determining the bounds of a topological entity. A zero level of sharpness (i.e. significance) is set for boundaries which fail the significance test.
• As indicated above, the visualization threshold's significance could be based on the appropriately normalized length of the perpendicular between the nth convex hull and the histogram. By indicating to a user the significance of the identified color boundaries used in the display, the interpretation of the displayed image can be aided. The significance of each of the color boundaries may also play a role in the appropriate use of connectivity algorithms used to define surfaces or volumes within the 3D data set which are associated with features identified by a user in the projected image.
  • FIG. 6 schematically illustrates a general purpose computer 132 of the type that may be used to perform processing in accordance with the above described techniques. The computer 132 includes a central processing unit 134, a read only memory 136, a random access memory 138, a hard disk drive 140, a display driver 142 and display 144 and a user input/output circuit 146 with a keyboard 148 and mouse 150 all connected via a common bus 152. The central processing unit 134 may execute program instructions stored within the ROM 136, the RAM 138 or the hard disk drive 140 to carry out processing of signal values that may be stored within the RAM 138 or the hard disk drive 140. Signal values may represent the image data described above and the processing may carry out the steps described above and illustrated in FIG. 4. The program may be written in a wide variety of different programming languages. The computer program itself may be stored and distributed on a recording medium, such as a compact disc, or may be downloaded over a network link (not illustrated). The general purpose computer 132 when operating under control of an appropriate computer program effectively forms an apparatus for processing image data in accordance with the above described technique. The general purpose computer 132 also performs the method as described above and operates using a computer program product having appropriate code portions (logic) for controlling the processing as described above.
  • The image data could take a variety of forms, but the technique is particularly well suited to embodiments in which the image data comprises a collection of 2D images resulting from CT scanning, MRI scanning, ultrasound scanning or PET that are combined to synthesize a 3D object using known techniques. The aided visualization of distinct features within such images can be of significant benefit in the interpretation of those images when they are subsequently projected into 2D representations along arbitrarily selected directions that allow a user to view the synthesized 3D object from any particular angle they choose.
  • Having described the principles of a method for automatically determining presets, specific examples of its application will now be given. In some applications, presets may be required to satisfy further conditions before they are accepted for defining color range boundaries (or other visualization parameters). For example, in applications where bone visualization within a CT data set is of prime interest, preset thresholds should not be defined between those signal values representing soft tissue and blood vessels since both of these should be made transparent in the displayed image. Instead, less significant but more appropriate thresholds within the range of signal values representing bone may be preferred.
  • FIRST EXAMPLE Active MR Preset
  • Magnetic resonance imaging (MR) data sets are generally un-calibrated and display a wide range of data-values, dependent on, for example, acquisition parameter values or the position of the VOI with respect to a scanner's detector coils during scanning. Accordingly, it is not usually possible to pre-estimate suitable signal values with which to attribute color range presets and this makes the present invention especially useful for application to MR data sets.
  • FIG. 7 a schematically shows the appearance of a visualization state tool displayed on the display 144 shown in FIG. 6. The display tool shows a user the outcome of a preset determination method for a selected VOI. The visualization state tool comprises a data display window 80, a color display bar 72, a display of opacity values 74, a display of boundary positions 78, a display of sharpness values 76 and a number of display modification buttons 82. The color display bar identifies the five color ranges available for display in this application, although no details of the available shades within each range are shown. The display of boundary positions 78 shows the signal values of the four determined color boundaries. The individual display windows for each of these four boundary positions are centered beneath the two color ranges indicated on the color display bar 72 with which each is associated. The display of sharpness values 76 can be used to determine the significance of the four determined boundaries. The display of opacity values 74 at each of the boundary positions (and at the maximum signal value) is also shown (with example values 0, 0, 73, 90, 100).
• The data display window shows a logarithmically scaled histogram of the signal values for the entire 3D data set overlaid with dashed and solid vertical lines marking the cosmetic and significant boundary positions respectively. The boundary lines may also be marked differently, for example based upon their relative significance. The histogram is colored to represent the color table and opacity mapping at each signal value indicated by the scale along the top of the data display window. The frequency distribution shown here is that of the entire image data set and is not restricted to the VOI on which the preset determination is based. In some cases it may aid a user's interpretation if the histogram of the selected VOI is shown. This might be in place of the histogram of the entire data set or indicated separately, such as in a separate window or drawn as a curve overlaying the entire image data set histogram. A curve showing the opacity curve is plotted overlaying the frequency distribution. The opacity curve is formed by interpolation between the opacity values set for each of the color boundary positions, taking into account the sharpness at each boundary, in this example 0, 0, 73, 90 and 100 for voxel values 11, 19, 102, 158 and the maximum voxel value respectively. The display modification buttons 82 allow a user to pan along the histogram shown in the display window and also allow color boundaries to be repositioned if required.
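• For illustration, the opacity curve of this example might be approximated by simple linear interpolation between the boundary opacities; the sharpness-dependent blending that the tool also applies is omitted here for brevity, and the maximum voxel value of 255 is an assumption:

```python
import numpy as np

# Boundary positions and opacities (per cent) from the example above;
# the final position is an assumed maximum voxel value of 255.
boundary_values = [11, 19, 102, 158, 255]
boundary_opacity = [0, 0, 73, 90, 100]

# Piecewise-linear opacity for every displayable signal value.
signal = np.arange(0, 256)
opacity = np.interp(signal, boundary_values, boundary_opacity)
```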
• The color boundaries shown in the display of boundary positions 78 in FIG. 7 a have been determined to attempt to best show what the user wants to see within the currently selected VOI. In some circumstances, for example when supplied with a VOI which includes the whole volume, the preset determination method might be configured to automatically assume that it is the tissue/air interface which is required to be visualized. Accordingly, after determining the presets, the voxels containing signal values below the lowest significant threshold will be assumed to represent air and be made transparent in the projection. If the VOI is selected such that only a small proportion of air is included, the preset determination method will look for a higher threshold of apparent significance to determine which voxels should be made transparent. As noted above, the intensities in MR vary substantially across the imaged volume due to coil positioning and other factors, so the smaller the VOI, the more accurate and useful the resulting visualization is likely to be.
• When applied to an MR data set, the active preset determination method of this example first tries to find a candidate threshold, using the technique described above, which further satisfies the condition that 60% (±30%) of the volume is rendered transparent.
  • In this example, a suitable threshold is found to be at signal value 15. However, the design of the “Color/Opacity Settings” interpolation used by the particular display software used in this example operates best if two boundaries are placed a small distance on either side of the computed background threshold in order to provide a rapid rise in the opacity curve as usually desired. In this example, these two boundaries are placed at positions ±4 signal value units either side of the visualization threshold at signal values of 11 and 19 respectively. The boundaries may also be placed at other positions, for example, at positions ±3 signal value units either side of the visualization threshold.
  • Since in this example there are five color ranges to allot (requiring four color boundaries), the method looks for up to two more candidate visualization thresholds above the background level, at which to place the two remaining color boundaries. If two significant visualization thresholds cannot be found, then the missing color boundaries are placed in the center of the largest gap, but, as noted above, the associated sharpness (i.e. significance) is set to 0, indicating a cosmetic boundary with no significance to selection.
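• The background-threshold search of this “Active MR” preset might be sketched as follows (Python; the helper names, the candidate-threshold source and the strictly-below transparency criterion are assumptions):

```python
import numpy as np

def active_mr_background(voxels, candidate_thresholds, offset=4):
    """Sketch of the Active MR background search: accept the first
    candidate threshold (e.g. from the hull analysis) for which
    between 30% and 90% of the VOI's voxels would be transparent,
    then place a boundary pair at +/-offset around it."""
    data = np.asarray(voxels).ravel()
    for t in candidate_thresholds:
        transparent = np.mean(data < t)   # fraction rendered transparent
        if 0.30 <= transparent <= 0.90:
            return t - offset, t + offset  # e.g. 11 and 19 for t = 15
    return None
```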
  • In this example, a second visualization threshold at signal value position 102 is determined to be significant and a single color boundary with a sharpness set to 5 is defined at signal value position 102. No further significant visualization thresholds are found and the remaining color boundary, between yellow and white, is placed at signal value position 158. It should be noted that, whilst not immediately apparent from the histogram shown in the data display window 80, this cosmetic boundary is in the middle of the widest gap between significant color boundaries. There are two reasons why it is perhaps not immediately apparent. Firstly, the histogram upon which the analysis is made differs from the histogram shown in the data display window since the former is restricted to the VOI and un-sculpted domain whereas the latter represents the entire 3D data set. Secondly, as noted above, in order to prevent the analysis from being thrown off by spurious values in the tails of the distribution, the lowest and highest 0.1% of voxels are excluded from the numerical analysis. Because the histogram shown in the dialog has a logarithmic vertical scale the voxels at the extremes of the voxel value range can appear more significant.
• FIG. 7 b shows an example image displayed according to the automatic preset of FIG. 7 a. The air/tissue interface is shown, with regions of skin, bone and soft tissue apparent in the image.
  • SECOND EXAMPLE Active CT Preset
  • In this example, the same preset type is used as in the first example. This may be useful for CT data sets in which a user wants to visualize soft tissue.
  • THIRD EXAMPLE Active Bone (CT) Preset
  • This example is for use on CT data sets for the purpose of visualizing and selecting bone.
  • FIG. 8 a schematically shows the appearance of a visualization state tool presenting an example of use of the “Active Bone (CT)” Preset. The different fields within the visualization state tool shown in FIG. 8 a will be understood from the description of FIG. 7 a.
• The “Active Bone (CT)” Preset operates by determining a first significant visualization threshold within the signal value range 70 HU to 270 HU. In the example, a visualization threshold value of 182 HU is determined. If no such visualization threshold is found then 170 HU is used. This first visualization threshold is used to set the background level in the display software by setting two boundary positions at ±45 HU from the first visualization threshold. The boundary positions in this example are accordingly at 137 HU and 227 HU. With the five available ranges of color indicated in FIG. 8 a there are two remaining color boundaries to determine. One of these is placed at −500 HU (which, in the “Active Bone” scheme, is denoted as a significant boundary with a sharpness of 5) to show some information about soft tissue in the side multi-planar reconstruction (MPR) views. A fourth color boundary is placed at 600 HU to give some intensity information. The fourth color boundary is ascribed a sharpness value of 0 to denote a cosmetic boundary.
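• A sketch of this boundary placement follows; the helper find_threshold_in and the sharpness attached to the ±45 HU pair are assumptions, while the fixed positions and the sharpness values of 5 and 0 for the −500 HU and 600 HU boundaries are taken from the example above:

```python
def active_bone_boundaries(find_threshold_in):
    """Sketch of the Active Bone (CT) preset. find_threshold_in(lo, hi)
    is an assumed helper returning the first significant hull threshold
    in [lo, hi] HU, or None if there is none."""
    t = find_threshold_in(70, 270) or 170   # fall back to 170 HU
    return [
        (-500, 5),    # soft-tissue boundary, significant (sharpness 5)
        (t - 45, 5),  # pair bracketing the bone threshold; this pair's
        (t + 45, 5),  # sharpness is an assumption, not stated in the text
        (600, 0),     # cosmetic intensity boundary (sharpness 0)
    ]

# With the example data set, find_threshold_in(70, 270) -> 182 HU,
# giving boundaries at -500, 137, 227 and 600 HU.
```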
  • FIG. 8 b shows an example image displayed according to the automatic preset of FIG. 8 a. Regions of bone are most apparent in the image.
  • FOURTH EXAMPLE Active Angio (CT) Preset
  • This preset assumes the data are in correctly calibrated Hounsfield units. The purpose of this preset is to visualize angio-tissue.
  • FIG. 9 a schematically shows the appearance of a visualization state tool presenting an example of use of the “Active Angio (CT)” Preset. The different fields within the visualization state tool shown in FIG. 9 a will be understood from the description of FIG. 7 a.
• The yellow/white boundary is placed at a position determined by the histogram analysis within the range 550 HU±200 HU, and this boundary is given a sharpness of 5 so that it is significant to selection. In the example a boundary position of 550 HU is determined. Thus, in good data sets, the selection tools can be used in conjunction with this preset to discriminate bone from contrast enhanced vasculature.
  • The other boundary positions are fixed at values −500 HU, 105 HU and 195 HU.
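• Correspondingly, the “Active Angio (CT)” placement might be sketched as follows; the sharpness attributed to the three fixed boundaries is an assumption, while the sharpness of 5 for the data-driven yellow/white boundary is from the text:

```python
def active_angio_boundaries(find_threshold_in):
    """Sketch of the Active Angio (CT) preset, reusing the assumed
    helper find_threshold_in(lo, hi) from the previous sketch."""
    t = find_threshold_in(350, 750) or 550  # yellow/white: 550 +/- 200 HU
    return [
        (-500, 0),  # fixed boundaries; sharpness 0 here is an assumption
        (105, 0),
        (195, 0),
        (t, 5),     # yellow/white boundary, significant to selection
    ]
```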
  • FIG. 9 b shows an example image displayed according to the automatic preset of FIG. 9 a. Regions of bone and angio-tissue are most apparent in the image.
  • Summary
  • It should be clear from the above examples that the preset determination method of the present invention can be specifically tailored in any number of ways to apply to data sets with known specific characteristics. The method can also be used as an entirely general tool with no prior knowledge of the data set, such as in the MR example described above. In addition, users may themselves customize the preset determination to suit the requirements of a particular study. This might be done, for example, where CT calibrated data are used and a user requires features with a particular X-ray attenuation to be identified. This might also be done to distinguish between two tissue types of similar X-ray attenuations. Similarly, with un-calibrated data, a user might modify the preset determination method based on the appearance of a single 2D projection so that the preset is applied consistently to all 2D projections generated from that voxel data set. By storing parameters associated with a user's personal customizations to the method (such as the significance test stringency, the typical fraction of the data set the user wants to appear as transparent, or specific signal value ranges in which thresholds should occur), the modified method could be consistently applied to further data sets.
  • While by way of example the foregoing description is directed mainly towards determining visualization thresholds which are used to define color boundaries, it will be appreciated that determined visualization thresholds are equally suitable for defining boundaries for visualization parameters other than color, such as opacity, that are relevant for rendering. Furthermore, it is often clinically useful for the placement of boundaries in an opacity mapping to be positioned at the same signal values as the boundaries in a color mapping. Other visualization parameters for which the invention could be used include rate of change of color with signal value, rate of change of opacity with signal value, and segmentation information.
  • The preset determination is also not limited to finding any particular number of boundaries, and the associated number of visualization thresholds, but is extendable to determining any number of boundaries or visualization thresholds. In some applications it may be appropriate to determine more visualization thresholds than there are distinct boundaries required to allow less significant thresholds to play a role in defining the specific allocation of available color shades or transitions between colors within one or more of the determined ranges, for example.
  • Although in the above examples, the visualization parameter boundaries are determined automatically, in other cases, some level of user input can assist in determining the most appropriate conditions for displaying an image. This is because once an automatic preset has been determined it may be desirable to make an assumption regarding what aspects of the data a user is interested in seeing in a displayed image. For example, in the histogram of CT data shown in FIG. 3, four tissue types are identified. As previously noted, it might reasonably be inferred that sub-range I of high X-ray attenuation corresponds to bone, sub-range II corresponds to blood, sub-range III corresponds to soft tissue and sub-range IV represents the background tissue type or air. Once an automatic preset has been determined which identifies the four sub-ranges, an assumption might be made that the user does not wish to view the data corresponding to sub-range IV (background tissue type and air) and so voxels corresponding to this region will be rendered transparent. The displayed image will then show the bone, blood and soft tissue. However, in some situations a user may be interested in viewing bone and blood only with soft tissue rendered transparent. In other situations, the user might wish to view the background tissue and so it should not be rendered transparent. To address this, in some embodiments of the invention the user may be invited to identify in a displayed 2D image one or more examples of areas which are of interest and should be rendered visible, and one or more examples of areas which are not of interest and which should be rendered transparent. The user might identify such example areas by moving a cursor to appropriate parts of a displayed image and selecting the examples by “clicking” with a mouse-like pointer, for example. Once the example areas have been identified, it is possible to determine which sub-ranges they fall within and so set appropriate display conditions for these sub-ranges (e.g. transparent or not-transparent).
  • In addition to employing user supplied examples of tissue types which are and which are not of interest to assist in displaying images in conjunction with the above described automatic preset determination, such techniques can also be applied more generally to classify different tissue types in medical image volume data.
  • The technique can be particularly useful where different tissue types appear very similar in the data, for example because they have similar X-ray stopping powers for CT data. In the histogram shown in FIG. 3, some of the sub-ranges may contain two subtly different tissue types, for example, sub-range I may include distinct regions of bone having subtly different densities from each other. Another example is identification of tumors in organs such as the liver or brain. It can be difficult to properly classify voxels in the volume data which correspond with these different tissue types due to the similarity in the signal values associated with them.
  • FIG. 10 shows an example screen shot of a display 101 of a 2-D image generated from a volume (i.e. 3-D) data set. A main image 100 displays a 2-D image rendered from the volume data. The main image 100 shown in the figure includes a partial wire-frame cuboid to assist a user in interpreting the orientation of the image with respect to the original volume data, and some basic textual information, such as the date and time. The display 101 also contains a sagittal section view 102, a coronal section view 104, and a transverse section view 106 of the volume data to assist in diagnostic interpretation. A number of different tissue types, for example corresponding to bone and brain, are seen in the image. The top portion of the skull has been sculpted away (i.e. rendered transparent) so that the underlying brain can be seen.
• A user viewing the display shown in FIG. 10 may wish to sculpt away further material so that a particular tissue type of interest within the brain can be viewed. For example, the tissue type of interest might correspond to a feature the user has observed in one of the section views 102, 104, 106 displayed on the left of the display and wishes to examine further. In some cases, it can be difficult for a segmentation algorithm to properly separate voxels in the volume data which correspond to a region of interest (and so should be displayed) from other voxels which do not (and so should not be displayed, i.e. rendered transparent). If there are significant differences in the voxel values associated with voxels corresponding to different types of tissue, for example as seen for bone and soft tissue in a CT scan, it can be relatively easy to classify the voxels. However, in cases where there are more subtle differences between a tissue type of interest and surrounding tissue, segmentation algorithms can often fail to properly classify voxels corresponding to the different tissue types. If segmentation is performed on the basis of voxel values expected for voxels corresponding to the tissue type of interest, a carefully selected window of values needs to be defined. Voxels having values falling within the window are considered to correspond to the tissue type of interest; voxels having values falling outside of the window are considered not to correspond to the tissue type of interest. However, it is not an easy task for a segmentation algorithm to select an appropriate window width, and this is generally done through a user interactively adjusting window parameters until satisfied with the desired appearance of a displayed image. The inherent subjectivity of this approach means the displayed image is inevitably based on a user's preconceptions of how the image should appear because there is a lack of objective selection as to which voxels correspond to the tissue type of interest and which do not. Furthermore, in some situations, for example in CT data where a tissue type of interest has an X-ray stopping power which is similar to that of surrounding tissue, the voxel values themselves may not discriminate strongly between different tissue types.
• FIG. 11 is a flow chart schematically showing a method of identifying voxels in a medical image data set which correspond to a tissue type of interest according to an embodiment of the invention. It will be assumed by way of example that the method is executed in response to a user, having been presented with the display shown in FIG. 10, identifying in the sagittal section view 102 an anomalous region of brain which appears slightly different from surrounding tissue and which he wants to examine further.
  • In this example the method is performed by a suitably programmed general purpose computer, such as that shown in FIG. 6. The computer may be a stand-alone machine or may form part of a network, for example, a Picture Archiving and Communication System (PACS) network.
• In Step 111 of FIG. 11, input is received from the user which identifies (selects) voxels corresponding to the tissue type of interest. With reference to FIG. 6, this is conveniently performed by the user positioning a cursor (“pointer”) displayed on the screen 144 displaying the image 101 over a pixel corresponding to the tissue type of interest in one of the section views 102, 104, 106, the cursor being positioned by manipulation of the mouse 150. However, other input means, such as a light-pen, graphics tablet or track ball, for example, may equally be used to point to the tissue type of interest. Since in this example the user initially noticed the region he wishes to examine further in the sagittal section view 102, it is assumed he positions the cursor over a pixel within the anomalous region in this view. If the region is also apparent in either of the other section views 104, 106, he may equally position the cursor over an appropriate pixel in those views. Once the cursor is positioned over a desired pixel, the user indicates his selection by pressing (“clicking”) a button on the mouse 150. Any other input means could equally be used. A voxel in the volume data corresponding to the selected pixel is then determined based on the plane of the section view within the volume data and the selected position within the section view. Depending on the displayed resolution of the section view, the selected pixel might span a number of voxels in the volume data. In this example, the voxel in which the selected pixel is situated is taken as the identified voxel. In other cases all of the voxels within a region of a predetermined size and shape surrounding a central selected voxel might be considered as being identified as corresponding to the tissue type of interest. The user may identify any number of further voxels by clicking elsewhere in the sagittal or other section views. The user may change the particular displayed sagittal, coronal and/or transverse section views to allow for voxels identifying the tissue type of interest to be selected from anywhere within the volume data. Typically five or so voxels corresponding to the tissue type of interest might be identified, though fewer or more may be preferred. These voxels will be referred to as positively selected voxels and the process of identifying them will be referred to as making a positive selection.
  • It will be appreciated that other schemes for allowing a user to identify voxels can also be used. For example, rather than “click” on an individual pixel in one of the section views, a range of pixels could be identified by a user “clicking” twice to identify opposite corners of a rectangle, or a centre and circumference point of circle, or by defining a shape in some other way. Voxels corresponding to pixels within the perimeter of the shape may then all be deemed to have been identified.
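• One possible realization of such shape-based selection is sketched below; it assumes one display pixel per voxel (no zoom) and uses illustrative names throughout:

```python
import numpy as np

def voxels_in_circle(plane_axis, slice_index, center_rc, radius, shape):
    """Return (z, y, x) indices of voxels whose in-plane position falls
    inside a circle drawn on a section view. plane_axis is the axis
    perpendicular to the section (0, 1 or 2), slice_index the position
    of the section along that axis, shape the volume's shape."""
    rows, cols = [s for ax, s in enumerate(shape) if ax != plane_axis]
    rr, cc = np.mgrid[0:rows, 0:cols]
    mask = ((rr - center_rc[0]) ** 2
            + (cc - center_rc[1]) ** 2) <= radius ** 2
    picked = []
    for r, c in zip(*np.nonzero(mask)):
        idx = [int(r), int(c)]
        idx.insert(plane_axis, slice_index)  # re-insert the fixed axis
        picked.append(tuple(idx))
    return picked
```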
  • In Step 112, input is received from the user which identifies (selects) voxels not corresponding to the tissue type of interest. Step 112 may be performed in a manner which is similar to Step 111 described above, but in which the user positions the cursor over pixels in the sagittal, coronal and/or transverse sections which do not correspond to the tissue type of interest. The user may indicate his selection by “clicking” a different mouse button to that used to identify the positively selected voxels. Alternatively, the same mouse button might be used in combination with the pressing of a key on the keyboard 148.
  • To allow subtly different tissue types to be distinguished, the user should identify voxels which are most similar to the tissue type of interest, but which he wants to exclude nonetheless. This is because voxels which differ more significantly from voxels corresponding to the tissue type of interest are easier to classify as not being of interest. In this case, where the tissue type of interest is an anomalous region of brain which appears slightly different from its surroundings in the sagittal section view 102, the user should identify voxels by selecting pixels in the area surrounding the anomalous region. However, if there are other regions which also appear similar to the tissue type of interest, but which are not necessarily in close proximity to it, the user may also identify some voxels corresponding to these regions. For example, five or so voxels not corresponding to the tissue type of interest might be identified. However, as few as one or many more than five may also be chosen. For example, if there are a number of regions in the data appearing only slightly different from the tissue type of interest, the user may choose to identify a number of voxels in each of these regions. The voxels identified in Step 112 will be referred to as negatively selected voxels, and the process of identifying them will be referred to as making a negative selection.
  • In Step 113 one or more characterizing parameters are computed for each of the voxels selected in Steps 111 and 112. In this example implementation four characterizing parameters, namely voxel value V, a local average A, a local standard deviation σ and maximum Sobel edge filter response S over all orientations, are determined for each voxel. In another embodiment, instead of maximum Sobel edge filter response, gradient magnitude is used. In this case the local average and standard deviation are computed for a 5×5×5 cube of voxels centered on the particular voxel at hand. However, other regions may also be used. For example, a smaller region may be considered for faster performance. Furthermore, the regions need not be three-dimensional; a 5×5 square of voxels, or other region in an arbitrarily chosen or pre-determined plane, may equally be used.
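  • A sketch of this computation, under stated assumptions, is shown below. It computes the local average and standard deviation over a 5×5×5 neighbourhood using separable uniform filters, and takes as the edge response S the maximum absolute Sobel response over the three axes, which is one plausible reading of “maximum response over all orientations”; it is not necessarily the implementation used in the embodiment.

```python
import numpy as np
from scipy import ndimage

def characterizing_parameters(volume, size=5):
    """Per-voxel characterizing parameters, as a sketch of Step 113.

    Returns voxel value V, local average A and local standard deviation
    sigma over a size^3 neighbourhood, and an edge response S taken here
    as the maximum absolute Sobel response over the three axes.
    """
    v = volume.astype(np.float64)
    local_mean = ndimage.uniform_filter(v, size=size)
    local_sq_mean = ndimage.uniform_filter(v * v, size=size)
    # Var = E[x^2] - E[x]^2; clip tiny negatives from floating-point error.
    local_std = np.sqrt(np.clip(local_sq_mean - local_mean**2, 0.0, None))
    sobel = np.max(np.abs([ndimage.sobel(v, axis=a) for a in range(3)]), axis=0)
    return v, local_mean, local_std, sobel

# Evaluate the parameters at a handful of selected voxels (synthetic data):
rng = np.random.default_rng(0)
vol = rng.normal(size=(64, 64, 64))
V, A, sigma, S = characterizing_parameters(vol)
for z, y, x in [(10, 20, 30), (40, 40, 12)]:
    print(V[z, y, x], A[z, y, x], sigma[z, y, x], S[z, y, x])
```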
  • In Step 114 the distributions of the computed characterizing parameters are analyzed to determine which of them may be used to distinguish between the positively selected and the negatively selected voxels.
  • FIGS. 12A-12D show example distributions of voxel value V, local average A, local standard deviation σ and maximum Sobel edge filter response S respectively for five positively selected and five negatively selected voxels. In each case, the values for the positively selected voxels are marked by “plus” symbols above the horizontal line representing the range of values of the particular characterizing parameter at appropriate positions along the line. The values for the negatively selected voxels are similarly represented by “minus” symbols below the line.
  • It can be seen from FIG. 12A that the voxel values V are similar and fall within roughly the same range for both the positively and negatively selected voxels. This indicates that voxel value itself is not a good discriminator between the positively and negatively selected voxels in this case.
  • It can be seen from FIG. 12B that the local averages A are also broadly similar for both the positively and negatively selected voxels. There appears to be a slight bias towards higher values of local average for positively selected voxels, but there is still a large degree of overlap.
  • However, it can be seen from FIG. 12C that the computed local standard deviations σ are significantly different for the positively and negatively selected voxels. In particular, the regions surrounding the positively selected voxels tend to have significantly larger standard deviations than those surrounding the negatively selected voxels. This indicates that the positively selected voxels from the region of tissue type which the user wishes to examine further correspond to regions of greater granularity in the data. It is likely to be this greater degree of granularity which causes the region to appear to human visual perception to be slightly different to the surrounding regions in the section views.
  • It can be seen from FIG. 12D that the computed maximum Sobel edge filter responses S are also different for the positively and negatively selected voxels, although to a lesser extent than the local standard deviations.
  • From these distributions of the computed characterizing parameters for the positively and negatively selected voxels, it is apparent that local standard deviation σ is a characterizing parameter which distinguishes well between positively and negatively selected voxels, and as such is considered to be a distinguishing parameter. In this example implementation only one distinguishing parameter is sought and is chosen on the basis that it is the best able of the computed characterizing parameters to discriminate between the positively and negatively selected voxels. The ability of a given characterizing parameter to discriminate is referred to as its discrimination power and may be parameterized using conventional statistical analysis. In this example, this is done by separately calculating the average and the standard deviation of each characterizing parameter for the positively and the negatively selected voxels. The discriminating power of a given characterizing parameter is then taken to be the difference in the average for the positively and negatively selected voxels divided by the quadrature sum of their standard deviations. The characterizing parameter having the greatest discriminating power is then taken to be the distinguishing parameter. As will be seen further below, in other examples multiple distinguishing parameters may be used, for example all characterizing parameters having a discriminating power greater than a certain level or a fixed number of characterizing parameters having the highest discriminating powers may be used.
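  • The discriminating power statistic described above can be written directly from the definition; the following sketch uses illustrative values only, loosely echoing FIGS. 12A-12D (σ separating the selections well, voxel value V not at all).

```python
import numpy as np

def discriminating_power(pos_values, neg_values):
    """Difference of the means of a characterizing parameter over positively
    and negatively selected voxels, divided by the quadrature sum of the two
    standard deviations, per the description above."""
    pos = np.asarray(pos_values, dtype=float)
    neg = np.asarray(neg_values, dtype=float)
    denom = np.hypot(pos.std(), neg.std())  # sqrt(std_pos^2 + std_neg^2)
    return abs(pos.mean() - neg.mean()) / denom

sigma_power = discriminating_power([8.1, 7.9, 8.4, 7.6, 8.2],
                                   [3.1, 2.8, 3.4, 3.0, 2.9])
v_power = discriminating_power([101, 99, 103, 98, 100],
                               [100, 102, 97, 101, 99])
# The parameter with the larger power becomes the distinguishing parameter.
print(sigma_power, v_power)
```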
  • In Step 115, the distinguishing parameter (i.e. local standard deviation σ in this case) is calculated for other voxels in the data. Although this may be done for all of the voxels, it may be more efficient to restrict the calculation to only a subset of voxels. For example, a conventional segmentation algorithm may first be applied to the data to identify which voxels belong to significantly different tissue types (e.g. bone or brain). Once this is done, the local standard deviation σ may then be calculated only for those voxels which have been classified by the conventional segmentation algorithm as corresponding to brain. This is because there would be no need to perform the computation for voxels which have already been distinguished from the tissue type of interest by the conventional segmentation algorithm. Alternatively, the calculation may only be made for voxels in a VOI identified by the user.
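  • The restriction to a subset can be as simple as applying a mask from a prior segmentation. In the sketch below, `brain_mask` is a hypothetical stand-in for the output of a conventional segmentation algorithm; note that with separable filters the full-volume filtering itself is cheap, so here it is the subsequent classification that is restricted to the masked voxels.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(5)
vol = rng.normal(size=(48, 48, 48))
brain_mask = np.zeros(vol.shape, bool)
brain_mask[8:40, 8:40, 8:40] = True   # stand-in for a prior brain segmentation

mean = ndimage.uniform_filter(vol, 5)
sq = ndimage.uniform_filter(vol * vol, 5)
sigma = np.sqrt(np.clip(sq - mean**2, 0, None))

sigma_brain = sigma[brain_mask]        # only these values need classifying
print(sigma_brain.size, "of", vol.size, "voxels considered")
```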
  • In Step 116, the distinguishing parameter, i.e. the local standard deviation for the example characterizing parameter distributions seen in FIGS. 12A-12D, is used to classify each of the other voxels. This is performed in this example by defining a critical local standard deviation σc (marked in FIG. 12C) between the average local standard deviation for the positively selected voxels and the average local standard deviation of the negatively selected voxels. If the local standard deviation computed in Step 115 for a particular voxel is greater than σc, the voxel is classified as belonging to the tissue type of interest. If the local standard deviation is less than σc, the voxel is classified as not belonging to the tissue type of interest.
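  • A minimal sketch of this binary classification follows; σc is placed here midway between the positive and negative selection averages, which is one simple choice consistent with the description, and the data are synthetic.

```python
import numpy as np

pos_sigmas = [8.1, 7.9, 8.4, 7.6, 8.2]   # illustrative values only
neg_sigmas = [3.1, 2.8, 3.4, 3.0, 2.9]
# Critical value between the two selection averages (one simple choice).
sigma_c = 0.5 * (np.mean(pos_sigmas) + np.mean(neg_sigmas))

rng = np.random.default_rng(1)
sigma_volume = np.abs(rng.normal(5.0, 2.0, size=(32, 32, 32)))  # stand-in data
of_interest = sigma_volume > sigma_c      # boolean mask over the volume
print(sigma_c, of_interest.mean())
```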
  • It will be appreciated that although in this particular example the computed value of one of the characterizing parameters (local standard deviation) is itself identified as being able to distinguish between the tissue type of interest and surrounding tissue, this is a special example of the more general case in which a distinguishing functional relationship between characterizing parameters is identified. For example, for a particular tissue type of interest it might be found that the ratio of two different characterizing parameters has a greater discriminating power between positively and negatively selected voxels than either of the characterizing parameters themselves. A numerical example of how this can arise is if values generally between 2.5 and 3.5 (arbitrary units) are found for one characterizing parameter for both positively and negatively selected voxels and values generally between 5 and 7 (arbitrary units) are found for another characterizing parameter, again for both positively and negatively selected voxels. Because of this, neither characterizing parameter alone is able to discriminate properly between positively and negatively selected voxels. However, if for the tissue type of interest the second characterizing parameter is always close to twice the value of the first, whereas for the negatively selected voxels, the two parameters are unrelated, a distinguishing function based on the ratio of the two parameters can be identified.
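  • The numerical example just given can be reproduced with a few lines of synthetic data: neither parameter separates the selections on its own, but their ratio sits tightly around 2 for positive selections and varies for negative ones.

```python
import numpy as np

rng = np.random.default_rng(2)
# Positively selected: second parameter ~ twice the first;
# negatively selected: the two parameters are unrelated.
p1_pos = rng.uniform(2.5, 3.5, 5)
p2_pos = 2.0 * p1_pos + rng.normal(0, 0.05, 5)
p1_neg = rng.uniform(2.5, 3.5, 5)
p2_neg = rng.uniform(5.0, 7.0, 5)

print(np.round(p2_pos / p1_pos, 2))   # clustered near 2
print(np.round(p2_neg / p1_neg, 2))   # scattered
```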
  • Depending on clinical application, additional requirements may be imposed on which voxels are to be considered to correspond to the tissue type of interest. For example, a requirement that the tissue type of interest forms a single volume may be imposed by applying a connectivity requirement. This would mean that voxels which are not linked to the positively selected voxels by a chain of voxels classified as corresponding to the tissue type of interest will be classified as not corresponding to this tissue type, even if their distinguishing parameters are such that they would otherwise be considered to do so.
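  • One way to realize such a connectivity requirement is with a standard connected-component labelling, as sketched below (here 6-connectivity via scipy's default structuring element; the helper name is hypothetical).

```python
import numpy as np
from scipy import ndimage

def enforce_connectivity(mask, seed_voxels):
    """Keep only voxels connected, through the classified mask, to at least
    one positively selected voxel."""
    labels, _ = ndimage.label(mask)  # label 6-connected components
    wanted = {labels[z, y, x] for z, y, x in seed_voxels} - {0}
    return np.isin(labels, sorted(wanted))

# Example: two separate blobs classified as "of interest", seed in only one.
m = np.zeros((20, 20, 20), bool)
m[2:5, 2:5, 2:5] = True
m[10:14, 10:14, 10:14] = True
print(enforce_connectivity(m, [(3, 3, 3)]).sum())  # only the seeded blob (27 voxels)
```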
  • Once the voxels have been classified, the user may proceed to examine those corresponding to the tissue type of interest as desired. For example, the user may render an image showing only the tissue type of interest. In another example, the tissue type of interest may be shown in one color and other tissue types in other colors, that is to say the method shown in FIG. 11 may be used as the basis of calculating presets. For example, with a monochrome image of the brain displayed, the classification could be used to distinguish between white and gray matter, with the gray matter displayed shaded in a semi-transparent blue color wash. In a further example, the selected object can be measured in some way, for example the volume is calculated. Another example is that the unclassified parts (“don't want” regions) are “dimmed”, i.e. rendered semi-transparent.
  • In some examples, an image based on the distinguishing parameter itself (or a function thereof) may be rendered (e.g. using the distinguishing parameter as the imaged parameter in the rendering rather than voxel value). In the above described situation, rather than rendering an image based on voxel values in the image data set (i.e. X-ray stopping power for CT data), an image based on the local standard deviation for each of the voxels may be rendered instead. Ranges of color and/or opacity may be associated with different values of local standard deviation and an image rendered accordingly. Visualization presets for the rendered image may be calculated as previously described, for example. This approach can provide for a displayed image in which a user can easily distinguish the tissue type of interest from surrounding tissue because characteristics of the tissue type of interest which differentiate it from its surroundings are used as the basis for rendering the image.
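  • The association of color and opacity ranges with distinguishing parameter values amounts to a transfer function; the sketch below is one hypothetical linear choice (the ramp endpoints and channel weights are arbitrary), not the embodiment's actual preset calculation.

```python
import numpy as np

def sigma_to_rgba(sigma, lo, hi):
    """Hypothetical linear transfer function: map the distinguishing
    parameter (local standard deviation) to color and opacity, so that
    high-sigma tissue renders bright and opaque and low-sigma tissue
    fades towards transparency."""
    t = np.clip((sigma - lo) / (hi - lo), 0.0, 1.0)
    # Channels are R, G, B, alpha.
    return np.stack([t, 0.5 * t, 1.0 - t, t], axis=-1)

rgba = sigma_to_rgba(np.array([2.0, 5.0, 8.0]), lo=3.0, hi=8.0)
print(rgba.round(2))
```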
  • Rather than display an image, the classification may be used in conjunction with conventional analysis techniques, for example to calculate the volume of the anomalous region corresponding to the tissue type of interest. It will of course be appreciated that in some cases a region might be of interest merely because the user wishes to identify it so it can be discarded from subsequent display or analysis.
  • It is not necessary for the steps shown in FIG. 11 to be performed in the order shown. For example, Step 111 and Step 112 could be reversed, or even intertwined. That is to say, a user could identify some voxels which correspond to the tissue type of interest, then some voxels which do not correspond to the tissue type of interest, and then some more voxels corresponding to the tissue type of interest and so on (i.e. in effect cycle between Step 111 and Step 112).
  • Furthermore, the process may return to earlier steps during execution. For example, a user may be alerted at Step 114 if there are no characterizing parameters having a discriminating power above a predetermined level. In response to this, the user may choose to return to Step 111 and/or Step 112 to provide more examples. Alternatively, in such a circumstance the user may instead indicate that additional characterizing parameters should be determined and their discriminating powers examined, or may simply choose to proceed with the classification nonetheless.
  • The method shown in FIG. 11 may be modified in a number of ways. For example, rather than simply having a binary classification (i.e. classifying voxels as either corresponding to the tissue type of interest or not corresponding to the tissue type of interest) a probability classification may be used. Each voxel may be attributed a likelihood of corresponding to the same tissue type as the positively selected voxels on the basis of how much its distinguishing parameter differs from those of the negatively selected voxels. In this scheme, a voxel having a local standard deviation of σ1 shown in FIG. 12C would be classified as having a greater probability of belonging to the population of voxels corresponding to the tissue type of interest than one having a local standard deviation of σ2.
  • Furthermore, more than one distinguishing parameter may be used for the classification. For example, if multiple parameters are identified in Step 114 as being capable of distinguishing between the positively and negatively selected voxels, these multiple distinguishing parameters may each then be computed for the other voxels in Step 115. The classification in Step 116 could then be based on a conventional multi-dimensional expectation maximization (EM) algorithm or other cluster recognition process which takes the distinguishing parameters computed for the positively and negatively selected voxels as seeds for defining the two populations of voxels (i.e. the population of voxels corresponding to the tissue type of interest and the population of voxels not corresponding to the tissue type of interest). Example classification schemes when the distinguishing function has two or more characterizing parameters are multivariate Gaussian maximum likelihood and k-NN (k-nearest neighbors).
  • The EM algorithm provides the distributions for the positive and negative cases which then allows, for each voxel, a probability to be determined that the voxel is a member of the population exemplified by the positively selected voxels, that is to say a probability that the voxel corresponds to the tissue type of interest. The EM algorithm may also provide an estimate of the overall fraction of voxels which are members of the population exemplified by the positively selected voxels. This information allows an image of the tissue type of interest to be rendered from the volume data in a number of ways.
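  • A sketch of this kind of EM-based classification is given below, using scikit-learn's GaussianMixture as a stand-in EM implementation (the embodiment does not specify a library). It seeds the component means from the selections, returns the per-voxel probability of membership of the positive population and the mixing weight as the estimated overall positive fraction, and assumes the components stay aligned with their seeds after fitting.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def em_probabilities(features, pos_feats, neg_feats):
    """`features` is an (n_voxels, n_params) array of distinguishing
    parameters; component means are seeded from the positively and
    negatively selected voxels. Returns per-voxel probability of belonging
    to the positive population and the estimated positive fraction."""
    means = np.vstack([np.mean(pos_feats, axis=0), np.mean(neg_feats, axis=0)])
    gm = GaussianMixture(n_components=2, means_init=means,
                         random_state=0).fit(features)
    # Component 0 was seeded from the positive selections.
    return gm.predict_proba(features)[:, 0], gm.weights_[0]

rng = np.random.default_rng(3)
feats = np.vstack([rng.normal(8, 1, (300, 2)), rng.normal(3, 1, (700, 2))])
p_pos, frac = em_probabilities(feats,
                               rng.normal(8, 1, (5, 2)),
                               rng.normal(3, 1, (5, 2)))
print(round(frac, 2))  # close to the true positive fraction of 0.3
```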
  • One way is to render all voxels having a probability of corresponding to the tissue type of interest lower than a threshold level as transparent, and render the remaining voxels using conventional techniques based on their voxel values (e.g. opacity to X-rays for CT data). The threshold level may be selected arbitrarily, for example at 50%, or may be selected such that the total number of voxels falling above the threshold level corresponds to the overall fraction of voxels which are members of the population exemplified by the positively selected voxels predicted by the EM algorithm.
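  • Selecting the threshold so that the kept voxels match the EM-predicted positive fraction is a quantile computation, as the following sketch (with stand-in probabilities) shows.

```python
import numpy as np

def fraction_threshold(p_positive, positive_fraction):
    """Pick the probability threshold so that the fraction of voxels kept
    matches the overall positive fraction estimated by the EM algorithm
    (the alternative to an arbitrary cut such as 50%)."""
    return np.quantile(p_positive, 1.0 - positive_fraction)

rng = np.random.default_rng(4)
p_positive = rng.beta(0.5, 0.5, size=10_000)  # stand-in per-voxel probabilities
t = fraction_threshold(p_positive, positive_fraction=0.3)
keep = p_positive >= t                         # rendered; the rest transparent
print(round(t, 3), keep.mean())                # kept fraction ~= 0.3
```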
  • Another way of generating an image showing the tissue type of interest would be to again render all voxels having a probability of corresponding to the tissue type of interest lower than a threshold level as transparent, but to then render the remaining voxels based on their probability of corresponding to the tissue type of interest, rather than their voxel values. This provides a form of probability image from which a user can immediately identify the likelihood of individual areas being correctly classified as corresponding to the tissue type of interest.
  • In either case, where an image based on rendering of probabilities is displayed, the user may be presented with the opportunity of manually altering the threshold level. This allows the user to determine an appropriate compromise between including too many false positives (i.e. voxels which do not correspond to the tissue type of interest) and excluding too many true positives (i.e. voxels which do correspond to the tissue type of interest).
  • It will be appreciated that in addition to the example characterizing parameters shown in FIGS. 12A-12D, there is a wide range of other parameters which may be used. For example, parameters based on local averages calculated over differently sized regions, parameters based on local gradients in voxel value, local spatial frequency components, and so on may all be used. It will also be appreciated that the choice of characterizing parameters to compute may depend on the type of data under study. For example, because MR data often show significant variations in sensitivity throughout a volume data set, absolute voxel value can be a poor indicator of tissue type in MR data. Because of this, characterizing parameters such as voxel value or local averages of voxel value might be excluded from use with MR data.
  • While the above description relates to a situation where a user is interested in further examining only a single tissue type, it will be understood that the method may equally be employed where a user wishes to identify multiple tissue types. This can be achieved by a user making positive selections for each of the different tissue types of interest in Step 111 shown in FIG. 11. Depending on the characteristics of the different tissue types of interest, there may be a unique distinguishing feature identified in Step 114 that can be used to classify the voxels. However, in some cases it may be necessary to employ multiple distinguishing parameters with voxels classified on the basis of one or other of these. For example, if in addition to the positive selection of voxels corresponding to the anomalous region of brain discussed above, the user is also interested in further examination of a second anomalous region sited elsewhere in the brain, the user simply makes some positive selections in that region. If the second anomalous region is represented by voxels having voxel values which are generally higher than the negatively selected voxels, but having a similar local standard deviation, then, unlike the voxels in the first anomalous region, they cannot be classified on the basis of local standard deviation. This means in Step 114 both local standard deviation σ and voxel value V will be determined to be distinguishing parameters and both will be calculated in Step 115 for other voxels in the data. In Step 116, voxels may then be classified as corresponding to one of the tissue types of interest if either their local standard deviation is different to that of the negatively selected voxels (in which case they relate to the first anomalous region) or if their voxel value is different to that of the negatively selected voxels (in which case they relate to the second anomalous region).
  • The method may also be applied in an iterative manner. For example, following execution of the method shown in FIG. 11 a probability image showing the classification of the voxels may be displayed to the user. The user may then decide to refine the classification by re-executing the method on the basis of the probability image. This is a form of relaxation labeling and allows for additional spatial information to be exploited in each subsequent iteration.
  • In some implementations of the method, the computation of the distinguishing features may include additional analysis techniques to assist in the proper classification of voxels. For example, partial volume effects might cause a boundary between two types of tissue which are not of interest to be wrongly classified. If this is a concern in a particular situation, techniques such as partial volume filtering as described in WO 02/084594 [1] may be employed when computing the distinguishing parameters.
  • In cases where a user considers that the classification has not performed adequately, for example should one of the negatively selected voxels be attributed a high probability of being a member of the population exemplified by the positively selected voxels, further segmentation analysis techniques may be applied. For example, conventional morphological segmentation algorithms may be applied to volume data representing the probability of each voxel corresponding to the tissue type of interest.
  • Additional user input may also be used to assist the classification process. In particular, the user input may additionally include clinical information, such as specification of tissue type or anatomical feature of interest. For example, the user input may adopt the paradigm “want that gray matter—don't want that white matter”, or “want that liver—don't want that other (unspecified) tissue”, or “want that (liver) tumor—don't want that healthy (liver) tissue”, or “want that (unspecified) tissue—don't want that fat tissue”. This user input can be provided by appropriate pointer selection in combination with filling out a text label or selection from a drop down menu of options. Following this user input, the distinguishing function can then be determined from the characterizing parameters having regard to the clinical information input by the user. For example, if the positively selected voxels are indicated as belonging to a tumor, local standard deviation may be preferentially selected as the distinguishing function, since this will be sensitive to the enhanced granularity that is an attribute of tumors.
  • In some clinical studies multiple volume data sets of a single patient may be available, for example from different imaging modalities or from the same imaging modality but taken at different times. If the images can be appropriately registered with one another, it is possible to classify voxels in one of these volume data sets on the basis of positively and negatively selected voxels in another. Distinguishing parameters may even be based on an analysis of voxels in one data set yet be used to classify voxels in another data set. This can help because with more information made available, it is more likely that a good distinguishing parameter can be found.
  • It will be appreciated that although particular embodiments of the invention have been described, many modifications/additions and/or substitutions may be made within the spirit and scope of the present invention.
  • Thus, for example, although the described embodiments employ a computer program operating on a general purpose computer, for example a conventional computer workstation, in other embodiments special purpose hardware could be used. For example, at least some of the functionality could be effected using special purpose circuits, for example a field programmable gate array (FPGA) or an application specific integrated circuit (ASIC) or in the form of a graphics processing unit (GPU). Also, multi-thread processing or parallel computing hardware could be used for at least some of the processing. For example, different threads or processing stages could be used to calculate respective characterizing parameters.
  • References
    • [1] WO 02/084594 (Voxar Limited)

Claims (28)

1. A method of numerically processing a medical image data set comprising voxels, the method comprising:
(a) receiving user input to positively and negatively select voxels that are and are not of a tissue type of interest;
(b) determining a distinguishing function that discriminates between the positively and negatively selected voxels on the basis of one or more characterizing parameters of the voxels; and
(c) classifying further voxels in the medical image data set on the basis of the distinguishing function.
2. The method according to claim 1, further comprising presenting an example image representing the medical image data set to a user, wherein the user positions a pointer at locations in the example image to select corresponding voxels.
3. The method according to claim 2, wherein a selected voxel is taken to be a voxel whose coordinates in the data set map to the location of the pointer in the example image.
4. The method according to claim 2, wherein selected voxels are taken to be voxels in a region surrounding a voxel whose coordinates in the data set map to the location of the pointer in the example image.
5. The method of claim 1, further comprising rendering an image of the medical image data set, wherein the rendering takes account of the classification of voxels, and displaying the image to the user.
6. The method of claim 1, further comprising rendering an image of the medical image data set, wherein the rendering is of a volume data set representing values of the distinguishing function.
7. The method according to claim 1, wherein at least one of the one or more characterizing parameters is a function of surrounding voxels.
8. The method of claim 1, further comprising further classifying voxels on the basis of the morphology of their respective classifications in the medical image data set.
9. The method of claim 1, wherein the distinguishing function is determined by computing the characterizing parameters for the selected voxels and taking as the distinguishing function the value of at least one characterizing parameter whose value depends on whether its associated voxel has been positively or negatively selected.
10. The method of claim 1, wherein voxels are classified as either corresponding to the tissue type of interest or not corresponding to the tissue type of interest.
11. The method of claim 10, further comprising rendering an image of the medical image data set, wherein the rendering takes account of the classification of voxels, and displaying the image to a user.
12. The method of claim 11, wherein voxels classified as not corresponding to the tissue type of interest are rendered as transparent.
13. The method of claim 11, wherein voxels classified as corresponding to the tissue type of interest are rendered as transparent.
14. The method of claim 11, wherein voxels classified as corresponding to the tissue type of interest are rendered in one range of displayable colors and voxels classified as not corresponding to the tissue type of interest are rendered in another range of displayable colors.
15. The method of claim 1, wherein voxels are classified by associating with them a probability that they correspond to the tissue type of interest.
16. The method of claim 15, further comprising rendering an image of the medical image data set, wherein the rendering takes account of the classification of voxels, and displaying the image to a user.
17. The method of claim 16, wherein the rendering takes account of the classification by rendering a volume data set representing the probability that the voxels correspond to the tissue type of interest.
18. The method of claim 17, wherein voxels having a probability of corresponding to the tissue type of interest of less than a threshold level are rendered as transparent.
19. The method of claim 18, further comprising adjusting the threshold level and re-rendering the image.
20. The method of claim 17, wherein a pre-determined fraction of the voxels having the lowest probabilities of corresponding to the tissue type of interest are rendered as transparent.
21. The method of claim 1, wherein the user input includes clinical information regarding at least one of the positively and negatively selected voxels, and wherein the distinguishing function is determined from the characterizing parameters having regard to the clinical information.
22. The method of claim 21, wherein the clinical information specifies tissue type.
23. The method of claim 1, wherein the medical image data set comprises a data set representing the probability that the voxels correspond to a tissue type of interest determined in a previous iteration of the method.
24. The method of claim 1, wherein the user input is prompted by displaying an image to a user from a 3D data set comprising a plurality of voxels, each with an associated signal value.
25. The method of claim 24, wherein the displaying of the image to the user comprises:
selecting a volume of interest (VOI) within the 3D data set;
generating a histogram of signal values from voxels that are within the VOI;
applying a numerical analysis method to the histogram to determine a visualization threshold; and
setting at least one of a plurality of boundaries for a visualization parameter according to the visualization threshold.
26. A computer program product bearing computer readable instructions for performing the method of claim 1.
27. A computer apparatus loaded with computer readable instructions for performing the method of claim 1.
28. Apparatus for numerically processing a medical image data set comprising voxels, the apparatus comprising:
(a) storage from which a medical image data set may be retrieved;
(b) a user input device configured to receive user input to positively and negatively select voxels that are and are not of a tissue type of interest; and
(c) a processor configured to determine a distinguishing function that discriminates between the positively and negatively selected voxels on the basis of one or more characterizing parameters of the voxels; and to classify further voxels in the medical image data set on the basis of the distinguishing function.
US10/922,700 2002-08-05 2004-08-20 Displaying image data using automatic presets Abandoned US20050017972A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US10/922,700 US20050017972A1 (en) 2002-08-05 2004-08-20 Displaying image data using automatic presets

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US10/212,363 US6658080B1 (en) 2002-08-05 2002-08-05 Displaying image data using automatic presets
US10/726,280 US20040170247A1 (en) 2002-08-05 2003-12-01 Displaying image data using automatic presets
US10/922,700 US20050017972A1 (en) 2002-08-05 2004-08-20 Displaying image data using automatic presets

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US10/726,280 Continuation-In-Part US20040170247A1 (en) 2002-08-05 2003-12-01 Displaying image data using automatic presets

Publications (1)

Publication Number Publication Date
US20050017972A1 true US20050017972A1 (en) 2005-01-27

Family

ID=29549654

Family Applications (3)

Application Number Title Priority Date Filing Date
US10/212,363 Expired - Lifetime US6658080B1 (en) 2002-08-05 2002-08-05 Displaying image data using automatic presets
US10/726,280 Abandoned US20040170247A1 (en) 2002-08-05 2003-12-01 Displaying image data using automatic presets
US10/922,700 Abandoned US20050017972A1 (en) 2002-08-05 2004-08-20 Displaying image data using automatic presets

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US10/212,363 Expired - Lifetime US6658080B1 (en) 2002-08-05 2002-08-05 Displaying image data using automatic presets
US10/726,280 Abandoned US20040170247A1 (en) 2002-08-05 2003-12-01 Displaying image data using automatic presets

Country Status (1)

Country Link
US (3) US6658080B1 (en)

Cited By (54)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060059530A1 (en) * 2004-09-15 2006-03-16 E-Cast, Inc. Distributed configuration of entertainment devices
US20060262112A1 (en) * 2005-05-23 2006-11-23 Carnegie Mellon University System and method for three-dimensional shape generation from partial and incomplete views, and interactive design system using same
US20060279568A1 (en) * 2005-06-14 2006-12-14 Ziosoft, Inc. Image display method and computer readable medium for image display
US20070008317A1 (en) * 2005-05-25 2007-01-11 Sectra Ab Automated medical image visualization using volume rendering with local histograms
US20070236496A1 (en) * 2006-04-06 2007-10-11 Charles Keller Graphic arts image production process using computer tomography
US20070269117A1 (en) * 2006-05-16 2007-11-22 Sectra Ab Image data set compression based on viewing parameters for storing medical image data from multidimensional data sets, related systems, methods and computer products
US20070274583A1 (en) * 2006-05-29 2007-11-29 Atsuko Sugiyama Computer-aided imaging diagnostic processing apparatus and computer-aided imaging diagnostic processing method
US20080150937A1 (en) * 2006-12-21 2008-06-26 Sectra Ab Systems for visualizing images using explicit quality prioritization of a feature(s) in multidimensional image data sets, related methods and computer products
US20080231632A1 (en) * 2007-03-21 2008-09-25 Varian Medical Systems Technologies, Inc. Accelerated volume image rendering pipeline method and apparatus
US20080292166A1 (en) * 2006-12-12 2008-11-27 Masaya Hirano Method and apparatus for displaying phase change fused image
US20090028287A1 (en) * 2007-07-25 2009-01-29 Bernhard Krauss Methods, apparatuses and computer readable mediums for generating images based on multi-energy computed tomography data
US20090208082A1 (en) * 2007-11-23 2009-08-20 Mercury Computer Systems, Inc. Automatic image segmentation methods and apparatus
US20090324073A1 (en) * 2006-08-02 2009-12-31 Koninklijke Philips Electronics N.V. Method of rearranging a cluster map of voxels in an image
US20100088644A1 (en) * 2008-09-05 2010-04-08 Nicholas Delanie Hirst Dowson Method and apparatus for identifying regions of interest in a medical image
US20100194750A1 (en) * 2007-09-26 2010-08-05 Koninklijke Philips Electronics N.V. Visualization of anatomical data
US20110125016A1 (en) * 2009-11-25 2011-05-26 Siemens Medical Solutions Usa, Inc. Fetal rendering in medical diagnostic ultrasound
US20110176711A1 (en) * 2010-01-21 2011-07-21 Radu Catalin Bocirnea Methods, apparatuses & computer program products for facilitating progressive display of multi-planar reconstructions
EP2437218A1 (en) * 2010-09-29 2012-04-04 Canon Kabushiki Kaisha Medical system
US20120293507A1 (en) * 2010-01-15 2012-11-22 Hitachi Medical Corporation Ultrasonic diagnostic apparatus and ultrasonic image display method
US20120321160A1 (en) * 2011-06-17 2012-12-20 Carroll Robert G Methods and apparatus for assessing activity of an organ and uses thereof
US20130044927A1 (en) * 2011-08-15 2013-02-21 Ian Poole Image processing method and system
CN103222876A (en) * 2012-01-30 2013-07-31 株式会社东芝 Medical image processing apparatus, image diagnosis apparatus, computer system and medical image processing method
US20130322713A1 (en) * 2012-05-29 2013-12-05 Isis Innovation Ltd. Color map design method for assessment of the deviation from established normal population statistics and its application to quantitative medical images
US8751961B2 (en) * 2012-01-30 2014-06-10 Kabushiki Kaisha Toshiba Selection of presets for the visualization of image data sets
US8775510B2 (en) 2007-08-27 2014-07-08 Pme Ip Australia Pty Ltd Fast file server methods and system
CN104166958A (en) * 2014-07-11 2014-11-26 上海联影医疗科技有限公司 Area-of-interest displaying and operating method
US8976190B1 (en) 2013-03-15 2015-03-10 Pme Ip Australia Pty Ltd Method and system for rule based display of sets of images
US9019287B2 (en) 2007-11-23 2015-04-28 Pme Ip Australia Pty Ltd Client-server visualization system with hybrid data processing
US20150131772A1 (en) * 2013-11-08 2015-05-14 Kabushiki Kaisha Toshiba Medical image processing apparatus, x-ray computerized tomography apparatus, and medical image processing method
US20150317792A1 (en) * 2012-12-27 2015-11-05 Koninklijke Philips N.V. Computer-aided identification of a tissue of interest
US9355616B2 (en) 2007-11-23 2016-05-31 PME IP Pty Ltd Multi-user multi-GPU render server apparatus and methods
US20160328631A1 (en) * 2015-05-08 2016-11-10 Siemens Aktiengesellschaft Learning-Based Aorta Segmentation using an Adaptive Detach and Merge Algorithm
US9509802B1 (en) 2013-03-15 2016-11-29 PME IP Pty Ltd Method and system FPOR transferring data to improve responsiveness when sending large data sets
US9904969B1 (en) 2007-11-23 2018-02-27 PME IP Pty Ltd Multi-user multi-GPU render server apparatus and methods
US9984478B2 (en) 2015-07-28 2018-05-29 PME IP Pty Ltd Apparatus and method for visualizing digital breast tomosynthesis and other volumetric images
US10070839B2 (en) 2013-03-15 2018-09-11 PME IP Pty Ltd Apparatus and system for rule based visualization of digital breast tomosynthesis and other volumetric images
US20190057555A1 (en) * 2017-08-16 2019-02-21 David Bruce GALLOP Method, system and apparatus for rendering medical image data
US10311541B2 (en) 2007-11-23 2019-06-04 PME IP Pty Ltd Multi-user multi-GPU render server apparatus and methods
US20190251755A1 (en) * 2018-02-09 2019-08-15 David Byron Douglas Interactive voxel manipulation in volumetric medical imaging for virtual motion, deformable tissue, and virtual radiological dissection
US10540803B2 (en) 2013-03-15 2020-01-21 PME IP Pty Ltd Method and system for rule-based display of sets of images
US10692272B2 (en) 2014-07-11 2020-06-23 Shanghai United Imaging Healthcare Co., Ltd. System and method for removing voxel image data from being rendered according to a cutting region
US10846911B1 (en) * 2019-07-09 2020-11-24 Robert Edwin Douglas 3D imaging of virtual fluids and virtual sounds
US10909679B2 (en) 2017-09-24 2021-02-02 PME IP Pty Ltd Method and system for rule based display of sets of images using image content derived parameters
US11090873B1 (en) * 2020-02-02 2021-08-17 Robert Edwin Douglas Optimizing analysis of a 3D printed object through integration of geo-registered virtual objects
US11183292B2 (en) 2013-03-15 2021-11-23 PME IP Pty Ltd Method and system for rule-based anonymized display and data export
US11183296B1 (en) 2018-02-09 2021-11-23 Robert Edwin Douglas Method and apparatus for simulated contrast for CT and MRI examinations
US20210373871A1 (en) * 2020-05-28 2021-12-02 Siemens Healthcare Gmbh Method for processing a medical data set by an edge application based on a cloud-based application
US11244495B2 (en) 2013-03-15 2022-02-08 PME IP Pty Ltd Method and system for rule based display of sets of images using image content derived parameters
US11403809B2 (en) 2014-07-11 2022-08-02 Shanghai United Imaging Healthcare Co., Ltd. System and method for image rendering
US11599672B2 (en) 2015-07-31 2023-03-07 PME IP Pty Ltd Method and apparatus for anonymized display and data export
US11750794B2 (en) 2015-03-24 2023-09-05 Augmedics Ltd. Combining video-based and optic-based augmented reality in a near eye display
US11766296B2 (en) 2018-11-26 2023-09-26 Augmedics Ltd. Tracking system for image-guided surgery
US11801115B2 (en) 2019-12-22 2023-10-31 Augmedics Ltd. Mirroring in image guided surgery
US11896445B2 (en) 2021-07-07 2024-02-13 Augmedics Ltd. Iliac pin and adapter

Families Citing this family (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE10229113A1 (en) * 2002-06-28 2004-01-22 Siemens Ag Process for gray value-based image filtering in computer tomography
US6658080B1 (en) * 2002-08-05 2003-12-02 Voxar Limited Displaying image data using automatic presets
US7868900B2 (en) * 2004-05-12 2011-01-11 General Electric Company Methods for suppression of items and areas of interest during visualization
US20050137477A1 (en) * 2003-12-22 2005-06-23 Volume Interactions Pte. Ltd. Dynamic display of three dimensional ultrasound ("ultrasonar")
US8340373B2 (en) * 2003-12-23 2012-12-25 General Electric Company Quantitative image reconstruction method and system
GB2415876B (en) * 2004-06-30 2007-12-05 Voxar Ltd Imaging volume data
JP2008509776A (en) * 2004-08-18 2008-04-03 コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ Apparatus for the evaluation of rotational X-ray projections
US7623250B2 (en) * 2005-02-04 2009-11-24 Stryker Leibinger Gmbh & Co. Kg. Enhanced shape characterization device and method
JP4105176B2 (en) * 2005-05-19 2008-06-25 ザイオソフト株式会社 Image processing method and image processing program
WO2008015592A2 (en) * 2006-07-31 2008-02-07 Koninklijke Philips Electronics N.V. A method, apparatus and computer-readable medium for scale-based visualization of an image dataset
CN100562291C (en) * 2006-11-08 2009-11-25 沈阳东软医疗系统有限公司 A kind of at CT treatment of picture device, method and system
US20080112531A1 (en) * 2006-11-15 2008-05-15 Juuso Siren Method and Assembly for CBCT Type X-Ray Imaging
EP2143076A2 (en) * 2007-03-30 2010-01-13 Koninklijke Philips Electronics N.V. Learning anatomy dependent viewing parameters on medical viewing workstations
JP2009011711A (en) * 2007-07-09 2009-01-22 Toshiba Corp Ultrasonic diagnosis apparatus
DE102007041108A1 (en) * 2007-08-30 2009-03-05 Siemens Ag Method and image evaluation system for processing medical 2D or 3D data, in particular 2D or 3D image data obtained by computer tomography
US20100130860A1 (en) * 2008-11-21 2010-05-27 Kabushiki Kaisha Toshiba Medical image-processing device, medical image-processing method, medical image-processing system, and medical image-acquiring device
US7983382B2 (en) * 2008-11-26 2011-07-19 General Electric Company System and method for material segmentation utilizing computed tomography scans
EP2441389A4 (en) * 2009-06-10 2017-03-01 Hitachi, Ltd. Ultrasonic diagnosis device, ultrasonic image processing device, ultrasonic image processing program, and ultrasonic image generation method
US8831328B2 (en) 2009-06-23 2014-09-09 Agency For Science, Technology And Research Method and system for segmenting a brain image
WO2012048295A2 (en) * 2010-10-07 2012-04-12 H. Lee Moffitt Cancer Center & Research Institute Method and apparatus for use of function-function surfaces and higher-order structures as a tool
JP6105903B2 (en) * 2012-11-09 2017-03-29 キヤノン株式会社 Image processing apparatus, image processing method, radiation imaging system, and program
CN104103083A (en) 2013-04-03 2014-10-15 株式会社东芝 Image processing device, method and medical imaging device
US11144184B2 (en) * 2014-01-23 2021-10-12 Mineset, Inc. Selection thresholds in a visualization interface
US10410398B2 (en) * 2015-02-20 2019-09-10 Qualcomm Incorporated Systems and methods for reducing memory bandwidth using low quality tiles
JP6687393B2 (en) * 2015-04-14 2020-04-22 キヤノンメディカルシステムズ株式会社 Medical image diagnostic equipment
WO2018173206A1 (en) * 2017-03-23 2018-09-27 株式会社ソニー・インタラクティブエンタテインメント Information processing device
EP3605469A4 (en) 2017-03-23 2021-03-03 Sony Interactive Entertainment Inc. Information processing device
US10580130B2 (en) * 2017-03-24 2020-03-03 Curadel, LLC Tissue identification by an imaging system using color information
CN113469194A (en) * 2021-06-25 2021-10-01 浙江工业大学 Target feature extraction and visualization method based on Gaussian mixture model

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4922915A (en) * 1987-11-27 1990-05-08 Ben A. Arnold Automated image detail localization method
US4945478A (en) * 1987-11-06 1990-07-31 Center For Innovative Technology Noninvasive medical imaging system and method for the identification and 3-D display of atherosclerosis and the like
US6343936B1 (en) * 1996-09-16 2002-02-05 The Research Foundation Of State University Of New York System and method for performing a three-dimensional virtual examination, navigation and visualization
US20020028008A1 (en) * 2000-09-07 2002-03-07 Li Fan Automatic detection of lung nodules from high resolution CT images
US6476810B1 (en) * 1999-07-15 2002-11-05 Terarecon, Inc. Method and apparatus for generating a histogram of a volume data set
US6514082B2 (en) * 1996-09-16 2003-02-04 The Research Foundation Of State University Of New York System and method for performing a three-dimensional examination with collapse correction
US20030095695A1 (en) * 2001-11-21 2003-05-22 Arnold Ben A. Hybrid calibration of tissue densities in computerized tomography
US6658080B1 (en) * 2002-08-05 2003-12-02 Voxar Limited Displaying image data using automatic presets

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4945478A (en) * 1987-11-06 1990-07-31 Center For Innovative Technology Noninvasive medical imaging system and method for the identification and 3-D display of atherosclerosis and the like
US4922915A (en) * 1987-11-27 1990-05-08 Ben A. Arnold Automated image detail localization method
US6343936B1 (en) * 1996-09-16 2002-02-05 The Research Foundation Of State University Of New York System and method for performing a three-dimensional virtual examination, navigation and visualization
US6514082B2 (en) * 1996-09-16 2003-02-04 The Research Foundation Of State University Of New York System and method for performing a three-dimensional examination with collapse correction
US6476810B1 (en) * 1999-07-15 2002-11-05 Terarecon, Inc. Method and apparatus for generating a histogram of a volume data set
US20020028008A1 (en) * 2000-09-07 2002-03-07 Li Fan Automatic detection of lung nodules from high resolution CT images
US20030095695A1 (en) * 2001-11-21 2003-05-22 Arnold Ben A. Hybrid calibration of tissue densities in computerized tomography
US6658080B1 (en) * 2002-08-05 2003-12-02 Voxar Limited Displaying image data using automatic presets

Cited By (131)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060059530A1 (en) * 2004-09-15 2006-03-16 E-Cast, Inc. Distributed configuration of entertainment devices
US20060262112A1 (en) * 2005-05-23 2006-11-23 Carnegie Mellon University System and method for three-dimensional shape generation from partial and incomplete views, and interactive design system using same
US20070008317A1 (en) * 2005-05-25 2007-01-11 Sectra Ab Automated medical image visualization using volume rendering with local histograms
US7532214B2 (en) * 2005-05-25 2009-05-12 Spectra Ab Automated medical image visualization using volume rendering with local histograms
US20060279568A1 (en) * 2005-06-14 2006-12-14 Ziosoft, Inc. Image display method and computer readable medium for image display
US20070236496A1 (en) * 2006-04-06 2007-10-11 Charles Keller Graphic arts image production process using computer tomography
US8295620B2 (en) 2006-05-16 2012-10-23 Sectra Ab Image data set compression based on viewing parameters for storing medical image data from multidimensional data sets, related systems, methods and computer products
US20070269117A1 (en) * 2006-05-16 2007-11-22 Sectra Ab Image data set compression based on viewing parameters for storing medical image data from multidimensional data sets, related systems, methods and computer products
US8041129B2 (en) 2006-05-16 2011-10-18 Sectra Ab Image data set compression based on viewing parameters for storing medical image data from multidimensional data sets, related systems, methods and computer products
US20070274583A1 (en) * 2006-05-29 2007-11-29 Atsuko Sugiyama Computer-aided imaging diagnostic processing apparatus and computer-aided imaging diagnostic processing method
US8442282B2 (en) * 2006-05-29 2013-05-14 Kabushiki Kaisha Toshiba Computer-aided imaging diagnostic processing apparatus and computer-aided imaging diagnostic processing method
US8189929B2 (en) 2006-08-02 2012-05-29 Koninklijke Philips Electronics N.V. Method of rearranging a cluster map of voxels in an image
US20090324073A1 (en) * 2006-08-02 2009-12-31 Koninklijke Philips Electronics N.V. Method of rearranging a cluster map of voxels in an image
US20080292166A1 (en) * 2006-12-12 2008-11-27 Masaya Hirano Method and apparatus for displaying phase change fused image
US8086009B2 (en) * 2006-12-12 2011-12-27 Ge Medical Systems Global Technology Company, Llc Method and apparatus for displaying phase change fused image
US20080150937A1 (en) * 2006-12-21 2008-06-26 Sectra Ab Systems for visualizing images using explicit quality prioritization of a feature(s) in multidimensional image data sets, related methods and computer products
US7830381B2 (en) 2006-12-21 2010-11-09 Sectra Ab Systems for visualizing images using explicit quality prioritization of a feature(s) in multidimensional image data sets, related methods and computer products
US20080231632A1 (en) * 2007-03-21 2008-09-25 Varian Medical Systems Technologies, Inc. Accelerated volume image rendering pipeline method and apparatus
US20090028287A1 (en) * 2007-07-25 2009-01-29 Bernhard Krauss Methods, apparatuses and computer readable mediums for generating images based on multi-energy computed tomography data
US7920669B2 (en) * 2007-07-25 2011-04-05 Siemens Aktiengesellschaft Methods, apparatuses and computer readable mediums for generating images based on multi-energy computed tomography data
US9167027B2 (en) 2007-08-27 2015-10-20 PME IP Pty Ltd Fast file server methods and systems
US10038739B2 (en) 2007-08-27 2018-07-31 PME IP Pty Ltd Fast file server methods and systems
US11075978B2 (en) 2007-08-27 2021-07-27 PME IP Pty Ltd Fast file server methods and systems
US9860300B2 (en) 2007-08-27 2018-01-02 PME IP Pty Ltd Fast file server methods and systems
US11902357B2 (en) 2007-08-27 2024-02-13 PME IP Pty Ltd Fast file server methods and systems
US9531789B2 (en) 2007-08-27 2016-12-27 PME IP Pty Ltd Fast file server methods and systems
US8775510B2 (en) 2007-08-27 2014-07-08 Pme Ip Australia Pty Ltd Fast file server methods and system
US10686868B2 (en) 2007-08-27 2020-06-16 PME IP Pty Ltd Fast file server methods and systems
US11516282B2 (en) 2007-08-27 2022-11-29 PME IP Pty Ltd Fast file server methods and systems
US20100194750A1 (en) * 2007-09-26 2010-08-05 Koninklijke Philips Electronics N.V. Visualization of anatomical data
US9058679B2 (en) * 2007-09-26 2015-06-16 Koninklijke Philips N.V. Visualization of anatomical data
US10706538B2 (en) 2007-11-23 2020-07-07 PME IP Pty Ltd Automatic image segmentation methods and analysis
US9728165B1 (en) 2007-11-23 2017-08-08 PME IP Pty Ltd Multi-user/multi-GPU render server apparatus and methods
US8548215B2 (en) * 2007-11-23 2013-10-01 Pme Ip Australia Pty Ltd Automatic image segmentation of a volume by comparing and correlating slice histograms with an anatomic atlas of average histograms
US11328381B2 (en) 2007-11-23 2022-05-10 PME IP Pty Ltd Multi-user multi-GPU render server apparatus and methods
US20090208082A1 (en) * 2007-11-23 2009-08-20 Mercury Computer Systems, Inc. Automatic image segmentation methods and apparatus
US11315210B2 (en) 2007-11-23 2022-04-26 PME IP Pty Ltd Multi-user multi-GPU render server apparatus and methods
US11244650B2 (en) 2007-11-23 2022-02-08 PME IP Pty Ltd Client-server visualization system with hybrid data processing
US9984460B2 (en) * 2007-11-23 2018-05-29 PME IP Pty Ltd Automatic image segmentation methods and analysis
US11900608B2 (en) 2007-11-23 2024-02-13 PME IP Pty Ltd Automatic image segmentation methods and analysis
US9019287B2 (en) 2007-11-23 2015-04-28 Pme Ip Australia Pty Ltd Client-server visualization system with hybrid data processing
US10825126B2 (en) 2007-11-23 2020-11-03 PME IP Pty Ltd Multi-user multi-GPU render server apparatus and methods
US10762872B2 (en) 2007-11-23 2020-09-01 PME IP Pty Ltd Client-server visualization system with hybrid data processing
US11900501B2 (en) 2007-11-23 2024-02-13 PME IP Pty Ltd Multi-user multi-GPU render server apparatus and methods
US11640809B2 (en) 2007-11-23 2023-05-02 PME IP Pty Ltd Client-server visualization system with hybrid data processing
US10380970B2 (en) 2007-11-23 2019-08-13 PME IP Pty Ltd Client-server visualization system with hybrid data processing
US11514572B2 (en) 2007-11-23 2022-11-29 PME IP Pty Ltd Automatic image segmentation methods and analysis
US10043482B2 (en) 2007-11-23 2018-08-07 PME IP Pty Ltd Client-server visualization system with hybrid data processing
US9355616B2 (en) 2007-11-23 2016-05-31 PME IP Pty Ltd Multi-user multi-GPU render server apparatus and methods
US9454813B2 (en) 2007-11-23 2016-09-27 PME IP Pty Ltd Image segmentation assignment of a volume by comparing and correlating slice histograms with an anatomic atlas of average histograms
US10614543B2 (en) 2007-11-23 2020-04-07 PME IP Pty Ltd Multi-user multi-GPU render server apparatus and methods
US10430914B2 (en) 2007-11-23 2019-10-01 PME IP Pty Ltd Multi-user multi-GPU render server apparatus and methods
US9595242B1 (en) 2007-11-23 2017-03-14 PME IP Pty Ltd Client-server visualization system with hybrid data processing
US9904969B1 (en) 2007-11-23 2018-02-27 PME IP Pty Ltd Multi-user multi-GPU render server apparatus and methods
US20170011514A1 (en) * 2007-11-23 2017-01-12 PME IP Pty Ltd Automatic image segmentation methods and analysis
US10311541B2 (en) 2007-11-23 2019-06-04 PME IP Pty Ltd Multi-user multi-GPU render server apparatus and methods
US20100088644A1 (en) * 2008-09-05 2010-04-08 Nicholas Delanie Hirst Dowson Method and apparatus for identifying regions of interest in a medical image
US9349184B2 (en) 2008-09-05 2016-05-24 Siemens Medical Solutions Usa, Inc. Method and apparatus for identifying regions of interest in a medical image
US20110125016A1 (en) * 2009-11-25 2011-05-26 Siemens Medical Solutions Usa, Inc. Fetal rendering in medical diagnostic ultrasound
US8941646B2 (en) * 2010-01-15 2015-01-27 Hitachi Medical Corporation Ultrasonic diagnostic apparatus and ultrasonic image display method
US20120293507A1 (en) * 2010-01-15 2012-11-22 Hitachi Medical Corporation Ultrasonic diagnostic apparatus and ultrasonic image display method
US20110176711A1 (en) * 2010-01-21 2011-07-21 Radu Catalin Bocirnea Methods, apparatuses & computer program products for facilitating progressive display of multi-planar reconstructions
US8638992B2 (en) 2010-09-29 2014-01-28 Canon Kabushiki Kaisha Medical system
EP2437218A1 (en) * 2010-09-29 2012-04-04 Canon Kabushiki Kaisha Medical system
US9084563B2 (en) 2010-09-29 2015-07-21 Canon Kabushiki Kaisha Medical system
CN102525407A (en) * 2010-09-29 2012-07-04 佳能株式会社 Medical system
US20120321160A1 (en) * 2011-06-17 2012-12-20 Carroll Robert G Methods and apparatus for assessing activity of an organ and uses thereof
US8938102B2 (en) 2011-06-17 2015-01-20 Quantitative Imaging, Inc. Methods and apparatus for assessing activity of an organ and uses thereof
US9025845B2 (en) * 2011-06-17 2015-05-05 Quantitative Imaging, Inc. Methods and apparatus for assessing activity of an organ and uses thereof
US20130044927A1 (en) * 2011-08-15 2013-02-21 Ian Poole Image processing method and system
US8751961B2 (en) * 2012-01-30 2014-06-10 Kabushiki Kaisha Toshiba Selection of presets for the visualization of image data sets
CN103222876A (en) * 2012-01-30 2013-07-31 株式会社东芝 Medical image processing apparatus, image diagnosis apparatus, computer system and medical image processing method
US20130322713A1 (en) * 2012-05-29 2013-12-05 Isis Innovation Ltd. Color map design method for assessment of the deviation from established normal population statistics and its application to quantitative medical images
US9754366B2 (en) * 2012-12-27 2017-09-05 Koninklijke Philips N.V. Computer-aided identification of a tissue of interest
US20150317792A1 (en) * 2012-12-27 2015-11-05 Koninklijke Philips N.V. Computer-aided identification of a tissue of interest
US9524577B1 (en) 2013-03-15 2016-12-20 PME IP Pty Ltd Method and system for rule based display of sets of images
US8976190B1 (en) 2013-03-15 2015-03-10 Pme Ip Australia Pty Ltd Method and system for rule based display of sets of images
US9509802B1 (en) 2013-03-15 2016-11-29 PME IP Pty Ltd Method and system for transferring data to improve responsiveness when sending large data sets
US10540803B2 (en) 2013-03-15 2020-01-21 PME IP Pty Ltd Method and system for rule-based display of sets of images
US11763516B2 (en) 2013-03-15 2023-09-19 PME IP Pty Ltd Method and system for rule based display of sets of images using image content derived parameters
US11244495B2 (en) 2013-03-15 2022-02-08 PME IP Pty Ltd Method and system for rule based display of sets of images using image content derived parameters
US10631812B2 (en) 2013-03-15 2020-04-28 PME IP Pty Ltd Apparatus and system for rule based visualization of digital breast tomosynthesis and other volumetric images
US11296989B2 (en) 2013-03-15 2022-04-05 PME IP Pty Ltd Method and system for transferring data to improve responsiveness when sending large data sets
US11916794B2 (en) 2013-03-15 2024-02-27 PME IP Pty Ltd Method and system for transferring data to improve responsiveness when sending large data sets
US11183292B2 (en) 2013-03-15 2021-11-23 PME IP Pty Ltd Method and system for rule-based anonymized display and data export
US10373368B2 (en) 2013-03-15 2019-08-06 PME IP Pty Ltd Method and system for rule-based display of sets of images
US10764190B2 (en) 2013-03-15 2020-09-01 PME IP Pty Ltd Method and system for transferring data to improve responsiveness when sending large data sets
US10762687B2 (en) 2013-03-15 2020-09-01 PME IP Pty Ltd Method and system for rule based display of sets of images
US11129578B2 (en) 2013-03-15 2021-09-28 PME IP Pty Ltd Method and system for rule based display of sets of images
US11701064B2 (en) 2013-03-15 2023-07-18 PME IP Pty Ltd Method and system for rule based display of sets of images
US10320684B2 (en) 2013-03-15 2019-06-11 PME IP Pty Ltd Method and system for transferring data to improve responsiveness when sending large data sets
US10820877B2 (en) 2013-03-15 2020-11-03 PME IP Pty Ltd Apparatus and system for rule based visualization of digital breast tomosynthesis and other volumetric images
US10832467B2 (en) 2013-03-15 2020-11-10 PME IP Pty Ltd Method and system for rule based display of sets of images using image content derived parameters
US11666298B2 (en) 2013-03-15 2023-06-06 PME IP Pty Ltd Apparatus and system for rule based visualization of digital breast tomosynthesis and other volumetric images
US11129583B2 (en) 2013-03-15 2021-09-28 PME IP Pty Ltd Apparatus and system for rule based visualization of digital breast tomosynthesis and other volumetric images
US10070839B2 (en) 2013-03-15 2018-09-11 PME IP Pty Ltd Apparatus and system for rule based visualization of digital breast tomosynthesis and other volumetric images
US9898855B2 (en) 2013-03-15 2018-02-20 PME IP Pty Ltd Method and system for rule based display of sets of images
US11810660B2 (en) 2013-03-15 2023-11-07 PME IP Pty Ltd Method and system for rule-based anonymized display and data export
US9749245B2 (en) 2013-03-15 2017-08-29 PME IP Pty Ltd Method and system for transferring data to improve responsiveness when sending large data sets
US20150131772A1 (en) * 2013-11-08 2015-05-14 Kabushiki Kaisha Toshiba Medical image processing apparatus, x-ray computerized tomography apparatus, and medical image processing method
US9846947B2 (en) * 2013-11-08 2017-12-19 Toshiba Medical Systems Corporation Medical image processing apparatus, X-ray computerized tomography apparatus, and medical image processing method
US10692272B2 (en) 2014-07-11 2020-06-23 Shanghai United Imaging Healthcare Co., Ltd. System and method for removing voxel image data from being rendered according to a cutting region
US11403809B2 (en) 2014-07-11 2022-08-02 Shanghai United Imaging Healthcare Co., Ltd. System and method for image rendering
CN104166958A (en) * 2014-07-11 2014-11-26 Shanghai United Imaging Healthcare Co., Ltd. Area-of-interest displaying and operating method
US11750794B2 (en) 2015-03-24 2023-09-05 Augmedics Ltd. Combining video-based and optic-based augmented reality in a near eye display
US9589211B2 (en) * 2015-05-08 2017-03-07 Siemens Healthcare Gmbh Learning-based aorta segmentation using an adaptive detach and merge algorithm
US20160328631A1 (en) * 2015-05-08 2016-11-10 Siemens Aktiengesellschaft Learning-Based Aorta Segmentation using an Adaptive Detach and Merge Algorithm
US11017568B2 (en) 2015-07-28 2021-05-25 PME IP Pty Ltd Apparatus and method for visualizing digital breast tomosynthesis and other volumetric images
US9984478B2 (en) 2015-07-28 2018-05-29 PME IP Pty Ltd Apparatus and method for visualizing digital breast tomosynthesis and other volumetric images
US10395398B2 (en) 2015-07-28 2019-08-27 PME IP Pty Ltd Apparatus and method for visualizing digital breast tomosynthesis and other volumetric images
US11620773B2 (en) 2015-07-28 2023-04-04 PME IP Pty Ltd Apparatus and method for visualizing digital breast tomosynthesis and other volumetric images
US11599672B2 (en) 2015-07-31 2023-03-07 PME IP Pty Ltd Method and apparatus for anonymized display and data export
US10818101B2 (en) * 2017-08-16 2020-10-27 Synaptive Medical (Barbados) Inc. Method, system and apparatus for rendering medical image data
GB2570747B (en) * 2017-08-16 2022-06-08 Bruce Gallop David Method, system and apparatus for rendering medical image data
US20190057555A1 (en) * 2017-08-16 2019-02-21 David Bruce GALLOP Method, system and apparatus for rendering medical image data
GB2570747A (en) * 2017-08-16 2019-08-07 Bruce Gallop David Method, system and apparatus for rendering medical image data
US10573087B2 (en) 2017-08-16 2020-02-25 Synaptive Medical (Barbados) Inc. Method, system and apparatus for rendering medical image data
US20200193719A1 (en) * 2017-08-16 2020-06-18 Synaptive Medical (Barbados) Inc. Method, system and apparatus for rendering medical image data
US11669969B2 (en) 2017-09-24 2023-06-06 PME IP Pty Ltd Method and system for rule based display of sets of images using image content derived parameters
US10909679B2 (en) 2017-09-24 2021-02-02 PME IP Pty Ltd Method and system for rule based display of sets of images using image content derived parameters
US10878639B2 (en) * 2018-02-09 2020-12-29 David Byron Douglas Interactive voxel manipulation in volumetric medical imaging for virtual motion, deformable tissue, and virtual radiological dissection
US20190251755A1 (en) * 2018-02-09 2019-08-15 David Byron Douglas Interactive voxel manipulation in volumetric medical imaging for virtual motion, deformable tissue, and virtual radiological dissection
US11183296B1 (en) 2018-02-09 2021-11-23 Robert Edwin Douglas Method and apparatus for simulated contrast for CT and MRI examinations
US11766296B2 (en) 2018-11-26 2023-09-26 Augmedics Ltd. Tracking system for image-guided surgery
US10846911B1 (en) * 2019-07-09 2020-11-24 Robert Edwin Douglas 3D imaging of virtual fluids and virtual sounds
US11801115B2 (en) 2019-12-22 2023-10-31 Augmedics Ltd. Mirroring in image guided surgery
US11285674B1 (en) * 2020-02-02 2022-03-29 Robert Edwin Douglas Method and apparatus for a geo-registered 3D virtual hand
US11833761B1 (en) * 2020-02-02 2023-12-05 Robert Edwin Douglas Optimizing interaction of tangible tools with tangible objects via registration of virtual objects to tangible tools
US11090873B1 (en) * 2020-02-02 2021-08-17 Robert Edwin Douglas Optimizing analysis of a 3D printed object through integration of geo-registered virtual objects
US20210373871A1 (en) * 2020-05-28 2021-12-02 Siemens Healthcare Gmbh Method for processing a medical data set by an edge application based on a cloud-based application
US11896445B2 (en) 2021-07-07 2024-02-13 Augmedics Ltd. Iliac pin and adapter

Also Published As

Publication number Publication date
US6658080B1 (en) 2003-12-02
US20040170247A1 (en) 2004-09-02

Similar Documents

Publication Publication Date Title
US20050017972A1 (en) Displaying image data using automatic presets
Roettger et al. Spatialized transfer functions
US7596267B2 (en) Image region segmentation system and method
JP5687714B2 (en) System and method for prostate visualization
EP3035287A1 (en) Image processing apparatus, and image processing method
US9159127B2 (en) Detecting haemorrhagic stroke in CT image data
US10290105B2 (en) Medical image processing apparatus to generate a lesion change site image
US8077948B2 (en) Method for editing 3D image segmentation maps
US20050147297A1 (en) Unsupervised data segmentation
Muigg et al. A four-level focus+context approach to interactive visual analysis of temporal features in large scientific data
WO2009138202A1 (en) Method and system for lesion segmentation
EP3796210A1 (en) Spatial distribution of pathological image patterns in 3d image data
US11058390B1 (en) Image processing via a modified segmented structure
EP2116973A1 (en) Method for interactively determining a bounding surface for segmenting a lesion in a medical image
US8705821B2 (en) Method and apparatus for multimodal visualization of volume data sets
CA2577547A1 (en) Method and system for discriminating image representations of classes of objects
WO2007041429A1 (en) Method and system for generating display data
US20100021031A1 (en) Method of Selecting and Visualizing Findings Within Medical Images
GB2416944A (en) Classifying voxels in a medical image
Patel et al. Moment curves
Coto et al. MammoExplorer: an advanced CAD application for breast DCE-MRI
US9767550B2 (en) Method and device for analysing a region of interest in an object using x-rays
KR20020079742A (en) Convolution filtering of similarity data for visual display of enhanced image
US11337670B1 (en) Method and apparatus for improving classification of an object within a scanned volume
CN114387380A (en) Method for generating a computer-based visualization of 3D medical image data

Legal Events

Date Code Title Description
AS Assignment

Owner name: VOXAR LIMITED, UNITED KINGDOM

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:POOLE, IAN;BISSELL, ANDREW JOHN;REEL/FRAME:015280/0012

Effective date: 20040724

AS Assignment

Owner name: BARCOVIEW MIS EDINBURGH, A UK BRANCH OF BARCO NV

Free format text: LICENSE;ASSIGNOR:VOXAR LIMITED;REEL/FRAME:017341/0732

Effective date: 20050104

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION