WO2004095378A1 - Combined 3D and 2D views

Combined 3D and 2D views

Info

Publication number
WO2004095378A1
Authority
WO
WIPO (PCT)
Prior art keywords
view
volume
selected point
area
processor
Prior art date
2003-04-24
Application number
PCT/IB2004/050501
Other languages
French (fr)
Inventor
Arianne M. C. Van Muiswinkel
Ronaldus F. J. Holthuizen
Frank G. C. Hoogenraad
Original Assignee
Koninklijke Philips Electronics N.V.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
2004-04-23
Publication date
2004-11-04
Application filed by Koninklijke Philips Electronics N.V.
Publication of WO2004095378A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/08: Volume rendering
    • G06T 2210/00: Indexing scheme for image generation or computer graphics
    • G06T 2210/41: Medical
    • G06T 2219/00: Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T 2219/008: Cut plane or projection plane definition
    • G06T 2219/028: Multiple view windows (top-side-front-sagittal-orthogonal)


Abstract

A system for visualizing a 3D medical volume includes an input (210) for receiving a data set representing voxel values of the 3D volume and a memory (230, 290) for storing the data set. A processor (260) generates, for a selected point (630) in the 3D volume, a combined 2D view (610) and 3D view (620) of the 3D volume. The 2D view includes at least one 2D slice (612, 614, 616) through the 3D volume including the selected point. The 3D view of the volume is obtained by projecting at least part of the 3D volume including the selected point onto an imaginary 2D projection screen. An output (240) of the system is used for providing pixel values of the combined 2D and 3D view for rendering.

Description

Combined 3D and 2D views
FIELD OF THE INVENTION
The invention relates to a system for visualizing a three-dimensional (hereinafter "3D") volume, in particular for medical applications. The invention also relates to software for use in such systems. The invention further relates to a method of visualizing a three-dimensional (3D) volume.
BACKGROUND OF THE INVENTION
For medical applications, volumetric data sets (3D data sets) are typically acquired using 3D scanners, such as CT (Computed Tomography) scanners or MR (Magnetic Resonance) scanners. A volumetric data set consists of a three-dimensional set of data, such as scalar or tensor values. The locations at which these values are given are called voxels, which is an abbreviation for volume elements. The value of a voxel is referred to as the voxel value. Fig. 1 shows a cube 100 surrounded by eight voxels 110. Typically the data is acquired and stored slice by slice, where each slice is two-dimensional, or the data is acquired in 3D but stored slice by slice in the database. The data in a slice can be represented as gray values. A stack of slices with increasing 'depth' value forms a 3D data set. Traditionally, a user could browse through the respective slices and in this way view the data set. It is also known to visualize the contents of a 3D data set by so-called multi-planar reformatting, wherein arbitrary cross sections can be generated through the volumetric data by 're-sampling' voxels in the cross section from the neighboring voxels in the 3D data set. In most cases, flat 2D cross sections are used. In this way, 2D slices are created that may lie in another direction than the originally sampled 2D slices. In principle, other curved cross sections can also be generated. This technique enables an operator to view a 2D image independent of the direction in which the data was acquired. With the increasing power of digital processing hardware (in the form of dedicated pre-programmed hardware or a programmable processor) it has become possible to exploit 3D rendering algorithms in actual systems for generating high-quality images from volumetric data sets. Such algorithms project all or part of the 3D data field onto a two-dimensional screen. The projected image is represented (e.g. displayed) to a user, giving a 3D view of the data set. Usually, the user can change the orientation/location of the 3D view (i.e. rotate and/or shift the volume).
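To make the slice-stack idea concrete, here is a minimal sketch (not part of the original disclosure; array shapes and axis conventions are assumptions) showing how a stack of acquired 2D slices forms a 3D data set in which both the original slices and reformatted cross sections are simple index patterns:

```python
import numpy as np

# Hypothetical stack of acquired 2D slices (shapes are illustrative).
n_slices, rows, cols = 64, 256, 256
slices = [np.random.rand(rows, cols) for _ in range(n_slices)]

# A stack of slices with increasing 'depth' value forms a 3D data set.
volume = np.stack(slices, axis=0)      # shape (depth, y, x)

# Traditional browsing: view the k-th originally sampled slice.
k = 32
transversal = volume[k, :, :]

# Flat cross sections in other directions through the same volume.
coronal = volume[:, rows // 2, :]
sagittal = volume[:, :, cols // 2]
```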
For many applications it is desired that a user can easily navigate through a 3D volume. It is also required that a user can easily select areas in a 3D volume for further processing. The outcome of the post-processing may, for example, be the identification of one or more 3D objects, such as tissues, bones, or fibers, like nerves or blood vessels. The 3D object can then be presented to the user, e.g. using some form of highlighting (e.g. changing intensity or using a different color in the view). Although 3D views are a good way to present 3D volumes and objects in such volumes, a drawback is that a user can easily lose orientation. It is also more difficult to perform certain navigation and area selection tasks.
SUMMARY OF THE INVENTION
It is an object of the invention to provide an improved user interface for visualizing a 3D volume. To meet the object of the invention, the system for visualizing a three-dimensional (hereinafter "3D") volume, in particular for medical applications, includes an input (810) for receiving a data set representing voxel values of the 3D volume; a memory (890) for storing the data set; a processor (860) for, under control of a computer program, for a selected point in the 3D volume generating a combined two-dimensional (hereinafter "2D") view and 3D view of the 3D volume, where the 2D view of the 3D volume includes at least one 2D slice through the 3D volume including the selected point and the 3D view of the volume is obtained by projecting at least part of the 3D volume including the selected point onto an imaginary 2D projection screen; and an output (840) for providing pixel values of the combined 2D and 3D view for rendering. By always showing at least one 2D slice in the traditional 2D view, the user can more easily maintain orientation. The 2D and 3D views are linked by having a point in common.
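For illustration only, the linkage described above can be sketched as a single selected point shared by both views; the class and method names below are hypothetical, and a simple maximum projection stands in for the real 3D renderer:

```python
import numpy as np

class CombinedView:
    """Sketch of a combined 2D/3D view linked by one shared selected point."""

    def __init__(self, volume: np.ndarray):
        self.volume = volume
        # Start the selected point at the volume centre (voxel indices z, y, x).
        self.point = tuple(s // 2 for s in volume.shape)

    def view_2d(self):
        """Three orthogonal slices through the selected point (fixed orientation)."""
        z, y, x = self.point
        return {
            "transversal": self.volume[z, :, :],
            "coronal": self.volume[:, y, :],
            "sagittal": self.volume[:, :, x],
        }

    def view_3d(self):
        """Stand-in 3D view: project the volume onto an imaginary screen
        (here a simple maximum intensity projection along one axis)."""
        return self.volume.max(axis=0)

    def move_point(self, new_point):
        """Navigation: changing the selected point regenerates both views."""
        self.point = tuple(new_point)
        return self.view_2d(), self.view_3d()
```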
As described in the dependent claim 2, the 2D slice has a predetermined orientation in the 2D view. Well known and accepted orientations are the transversal, sagittal and coronal 2D view. By keeping the 2D slice in the 2D view in this fixed orientation, the chances of mistakes are reduced. For example, a user may have rotated the 3D representation by 180 degrees in a certain direction and have forgotten that now in the 3D view left and right (or up and down) are reversed. By keeping the orientation fixed in the 2D view, the area of concern (that sometimes can be identified more easily in the 3D view) can now also be seen at the correct location and orientation in the 2D view. As described in the dependent claim 3, preferably the 2D view includes three orthogonal 2D slices, advantageously all in a predetermined and standard orientation.
According to the measure of the dependent claim 4, navigation in the 2D view also results in a corresponding change in the 3D view. In particular when the 2D view is in a fixed orientation, this form of navigation is simple and easy to understand. Normally, the navigation in the 2D view will be limited to moving the selected point and may not enable a rotation of the 3D object in the 3D view.
According to the measure of the dependent claim 5, in a 2D slice the intersection lines with the other two slices are shown in the 2D view. An operator can easily navigate in two dimensions by changing the location of the intersection lines. For example, the operator may drag one (or both) of the lines to a new location in the slice. He may also indicate a new intersection point of the two lines via a mouse click. A change in an intersection line results in selecting a new slice that intersects the slice in which the operator performs the navigation at the newly selected intersection. So, this will automatically result in an update of the 2D view. Since it also changes the main selected point (the intersection of all three slices) it will also result in a corresponding update of the 3D view.
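Continuing the hypothetical CombinedView sketch above, dragging an intersection line inside one slice amounts to updating one coordinate of the shared selected point, which selects the new orthogonal slice and regenerates both views:

```python
def drag_intersection_line(view, slice_name, line, new_index):
    """Hypothetical handler: within 'slice_name', move the intersection line
    of the slice named by 'line' to voxel index 'new_index'."""
    z, y, x = view.point
    if slice_name == "transversal":   # shows lines of coronal (y) and sagittal (x)
        y, x = (new_index, x) if line == "coronal" else (y, new_index)
    elif slice_name == "coronal":     # shows lines of transversal (z) and sagittal (x)
        z, x = (new_index, x) if line == "transversal" else (z, new_index)
    else:                             # sagittal: transversal (z) and coronal (y) lines
        z, y = (new_index, y) if line == "transversal" else (z, new_index)
    return view.move_point((z, y, x))  # updates both the 2D view and the 3D view
```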
According to the measure of the dependent claim 6, the 3D view is a projection of the three 2D slices onto a 2D projection screen. It will be appreciated that preferably the orientation of the 2D slices in the 3D view is selectable (i.e. the 3D representation can be freely rotated in each direction).
According to the measure of the dependent claim 7, the orientation and/or location of the 2D slices is indicated in the 3D view. So, if the 3D representation is rotated, the operator can see the 2D slices in the rotated position. By comparing this to the fixed form in the 2D view, the operator can more easily keep track of the exact location/orientation of an area of interest. Navigation is possible in the 3D view by rotating one or more of the indicated 2D slices or by selecting a different slice (e.g. by dragging the indicated 2D slice to a different location). The intersection point may also be moved, e.g. using cursor control.
According to the measure of the dependent claim 8, the processor can identify a 3D object (e.g. tissue, bone, fiber, etc.) in the 3D volume and represent the object simultaneously in both views. For the 3D view this may be performed by a projection operation also used for projecting the 3D dataset (or subset thereof) onto a 2D screen. For the 2D view this may be performed by showing the intersection of the 3D object with the 2D slice or by projecting the 3D object onto the 2D slice. According to the measure of the dependent claim 11, an operator can select an area of interest in a 2D slice in the 2D view. This allows for a more accurate selection in a plane than is usually possible in a 3D projection. The processor then determines a corresponding 3D area (e.g. the same tissue as the identified area) or performs another form of processing that results in an identification of 3D objects relating to the selected 2D area (e.g. fibers that lie in or cross the selected 2D area). The outcome is shown in both views.
According to the measure of the dependent claim 12, a human operator can select an area in the 3D view of the 3D volume for further processing. The further processing may, for example, result in the identification of a 3D object. The processor then determines for the selected 3D area and/or corresponding 3D object in the 3D volume a corresponding area in the 2D slice of the 2D view (or all 2D slices if there is more than one slice in the 2D view). The processor then highlights the corresponding 2D area(s) in the 2D view. The 2D areas may be created by projecting the 3D area onto the respective 2D slices.
These and other aspects of the invention are apparent from and will be elucidated with reference to the embodiments described hereinafter.
BRIEF DESCRIPTION OF THE DRAWINGS
In the drawings:
Fig. 1 shows a voxel cube;
Fig. 2 shows a block diagram of the system according to the invention;
Fig. 3 gives a 2D illustration of re-sampling;
Fig. 4 illustrates a 3D volume visualization algorithm based on ray casting;
Fig. 5 shows a schematic example of the combined 2D view and 3D view;
Fig. 6 shows a combined 3D and 2D view for an actual MR scan;
Fig. 7 illustrates navigating through the 3D volume using sliders;
Fig. 8 illustrates representing a 3D object in the 2D view and the 3D view by projection; and
Fig. 9 shows representing the 3D object in the 2D view through showing the intersection of the 3D object with the 2D slices.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
The system for visualizing volumes and a method of doing so will be described for medical applications. It will be appreciated that the system and method can be applied to other applications as well, in general for inspection of the inner parts and structure of all objects which can be measured with a system characterized by the fact that processing of the measurements results in a 3-dimensional dataset (3D array of measurements) representing a (part of the) volume of the object, and in which each data element or voxel relates to a particular position in the object and has a value which relates to one or more local properties of the object, for example for X-ray inspection of objects that cannot be opened easily in the time available.
Fig. 2 shows a block diagram of the system according to the invention. The system may be implemented on a conventional computer system such as a workstation or high-performance personal computer. The system 200 includes an input 210 for receiving a three-dimensional set of data representing voxel values of the 3D volume. The data may be supplied via a conventional computer network, such as Ethernet, or a telecommunications network, either wired or wireless, or combinations thereof, or via computer peripherals for reading common information carriers for magnetic or optical recording such as tapes, CDs, DVDs and the like, including solid state memories, such as flash memory. In Fig. 2, the image is acquired by an image acquisition device 220, such as a medical MR or CT scanner. Such an acquisition device may be part of the system, but may also be external to the system. The system includes a storage 230 for storing the data set. Preferably, the storage is of a permanent type, such as a hard disc. An output 240 of the system is used for providing pixel values for rendering. It may supply the image in any suitable form, for example as a bit-mapped image through a network to another computer system for display. Alternatively, the output may include a graphics card/chip set for direct rendering of the image on a suitable display 250. The display may, but need not, be part of the system. A human operator (user) may control the system via user input devices, such as a mouse 270 or a keyboard 280. Also other suitable means, such as voice control, may be used. The system further includes a processor 260 for, under control of a computer program, processing the data set to obtain representations of the volume for rendering. The program may be loaded from a permanent storage, such as storage 230, into a working memory 290, such as RAM, for execution. In the example, the same memory 290 may be used for storing the data from the storage 230 during execution. If the data set is too large to be fully stored in the main memory, the storage 230 may act as a virtual memory.
According to the invention, the processor 260 is operative to generate a two-dimensional (hereinafter "2D") view of the 3D volume in the form of a 2D slice through the volume. Preferably, the 2D slice has a predetermined orientation in the 2D view so that the operator does not easily lose orientation. In a preferred embodiment, the 2D view includes three orthogonal 2D slices through the 3D volume, preferably all with a standard orientation such as the transversal, sagittal and coronal 2D view. If the 3D data set has the same orientation as the 2D slices, the slices can simply be created by taking the corresponding 2D set of voxels for each 2D slice. By using so-called multi-planar reformatting, a 2D slice can also be created in a different orientation than the 3D data set by 'resampling' voxels in the cross section from the neighboring voxels in the 3D data set. Fig. 3 gives a 2D illustration of taking samples for pixels of a 2D slice (shown as dots 310) taken along a pixel line. The rectangles represent voxels of the 3D volume. For a sample (pixel) to be derived, voxel values in the neighborhood of the sample position must be retrieved from the volume. The number of voxel values required depends on the extent of the interpolation function. Typically, a tri-linear interpolation function is used where the eight nearest voxels contribute to a sample, weighted based on the distance of the sample to the voxel. The voxels that are accessed during linear interpolation of samples of one ray are shown using shading. The illustration is in 2D for simplicity. In practice the slice may not lie in one of the planes of the 3D data set. In that case the re-sampling is three-dimensional.
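A minimal sketch of this re-sampling (an illustration under stated assumptions, not the patent's implementation: the volume is a numpy array indexed (z, y, x), boundary handling is naive, and the slice-spanning vectors u and v are hypothetical parameters):

```python
import numpy as np

def trilinear_sample(volume, p):
    """Tri-linear interpolation: the eight nearest voxels contribute to one
    sample, weighted by distance to the sample position."""
    z, y, x = p
    z0, y0, x0 = int(np.floor(z)), int(np.floor(y)), int(np.floor(x))
    dz, dy, dx = z - z0, y - y0, x - x0
    s = 0.0
    for kz in (0, 1):
        for ky in (0, 1):
            for kx in (0, 1):
                w = ((dz if kz else 1 - dz) *
                     (dy if ky else 1 - dy) *
                     (dx if kx else 1 - dx))
                s += w * volume[z0 + kz, y0 + ky, x0 + kx]
    return s

def resample_oblique_slice(volume, origin, u, v, shape):
    """Multi-planar reformatting sketch: sample a flat 2D slice spanned by
    unit vectors u and v starting at 'origin' in voxel coordinates."""
    origin, u, v = map(np.asarray, (origin, u, v))
    out = np.empty(shape)
    for i in range(shape[0]):
        for j in range(shape[1]):
            out[i, j] = trilinear_sample(volume, origin + i * u + j * v)
    return out
```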
According to the invention, the processor also generates a 3D view of the volume by projecting at least part of the 3D volume onto an imaginary 2D projection screen. Fig. 4 illustrates a sophisticated volume visualization algorithm that takes as input the entire 3D volume (or part of it) and projects this data field onto a two-dimensional screen 410. Each projection is from a predetermined view point 420 that may be selectable by a user or may be dynamically changed (e.g. giving a virtual tour through the volume). This is achieved by casting a ray 430 from the view point through each pixel (i, j) of the imaginary projection screen and through the data field. At discrete k locations 440, 442, 444, 446 along the ray, indicated in light gray, the data is re-sampled from neighboring voxels. Various rendering algorithms are known for calculating a pixel value for pixel (i, j) in dependence on voxels that are near the ray locations in the volume. Examples of such rendering algorithms are volume rendering and iso-surface rendering. By observing a volume from a location sufficiently outside the volume, the ray casting can be represented by parallel rays projecting the image on a 2D screen. The accumulation of projection information is commonly performed through a function applied to samples taken along the projection rays. Common volume visualization functions are average value, maximum value (the so-called Maximum Intensity Projection or MIP), minimum value and opacity blend (also referred to as alpha blending). Applying an interpolation filter function to the volume at the required sample position forms the samples required by the projection function. According to the invention, the 3D view and 2D view are linked by both including the same selected point in the volume. For the 3D view this implies that the part of the 3D volume that is projected onto the projection screen includes the selected point. For the 2D view this implies that at least one 2D slice in the 2D view, and preferably all orthogonal 2D slices in the view, include the selected point. In the latter case of three orthogonal 2D slices, the selected point is the intersection point of the slices. Fig. 5 shows an example of the combined 2D view 510 and 3D view 520. The 2D view includes three orthogonal 2D slices 512, 514 and 516. The dot in the centre of each slice is the selected point 530. The 3D view shows a 3D representation of the volume. In this preferred embodiment, the 3D view is obtained by projecting the three 2D slices onto the imaginary 2D projection screen. The intersection point of the three 2D slices is the selected point 530. It will be appreciated that also other 3D views are possible.
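For parallel rays aligned with a volume axis, the projection functions named above reduce to one array reduction per pixel of the imaginary screen. The sketch below is an assumption-level illustration, not the renderer of Fig. 4; opacity blending is excluded since it needs ordered front-to-back accumulation rather than a single reduction:

```python
import numpy as np

def parallel_projection(volume, func=np.max, axis=0):
    """Project a volume onto a 2D screen with parallel rays along 'axis'.
    func=np.max gives the Maximum Intensity Projection (MIP), np.mean the
    average projection, and np.min the minimum projection."""
    return func(volume, axis=axis)

# Usage: mip = parallel_projection(volume, np.max)
#        avg = parallel_projection(volume, np.mean)
```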
Fig. 6 shows a similarly combined 3D and 2D view, now with actual MR scans of a human brain. The 2D view 610 includes a transversal view slice 612, a sagittal view slice 614 and a coronal view slice 616. The 3D view 620 is a projection of all three 2D slices. It will be appreciated that the 3D view may be freely moved, rotated, mirrored, etc. whereas preferably the 2D slices stay in the same predetermined orientation. In each of the 2D slices a label a is used to indicate the two intersection lines with the other two 2D slices. The common selected point is indicated using number 630. In Fig. 6 all slices are the 'middle' slice. In reality this does not need to be the case. In a preferred embodiment, the processor enables a human operator to navigate through the 3D volume by changing the selected point in a 2D slice of the 2D view. This can for example be done by moving in one of the 2D slices one of the two intersection lines where the other 2D slices intersect the 2D slice. This results in selecting the involved orthogonally intersecting slice that corresponds to the newly chosen intersection line position. The moving of an intersection line may be done in any suitable way, including cursor control, or dragging the line on the screen using, for example, a mouse. In response to the new selection, the processor regenerates the 2D view (i.e. at least one of the 2D slices is changed) and regenerates the 3D view. It will be appreciated that changing an intersection line effectively changes the main selected point 630 in common in all views. It will be appreciated that in a preferred embodiment the operator can also directly move the intersection point in one of the 2D slices, possibly resulting in selecting a new 2D slice for both other 2D slices (not for the 2D slice in which the selection is made). Fig. 6 also illustrates a preferred embodiment, wherein the 3D view includes an orientation and/or location of the 2D slices in the 3D view. In Fig. 6 this is done in two ways using labels b and c. Labels b give the outline of the least easily identifiable slice 616. In principle also the other two slices can be indicated in the same way. Labels c indicate the intersection lines of the slices. Preferably, the processor enables a human operator to navigate through the 3D volume by changing the selected point in the 3D view. This can be done by changing the orientation and/or location of at least one of the 2D slices shown in the 3D view or moving the selected point. Suitable ways of doing this are using cursor control (e.g. using the left arrow results in moving the selected point to the left, etc.), moving the outline b, or clicking on or near a slice, highlighting the slice and dragging the highlighted slice. Fig. 7 shows an alternative way, using sliders, one for each slice of the 3D view. Slider 710 corresponds to the sagittal view, where R=right and L=left. Slider 720 corresponds to the coronal view, where A=anterior (i.e. toe position) and P=posterior (i.e. heel position). Slider 730 corresponds to the transversal view, where F=feet position and H=head position. In a preferred embodiment the processor is operative to process the 3D data set and identify 3D objects in the 3D volume represented by the 3D data set. In itself any suitable identification method may be used. The processor may simply process the entire volume or only process a selected area. The dataset may, for example, be processed to identify tissue types, bones, organs, etc. This can very well be done on the entire volume.
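Reusing the earlier hypothetical CombinedView sketch, the sliders of Fig. 7 can be wired so that each drives one coordinate of the shared selected point, for example:

```python
def on_slider(view, which, value):
    """Hypothetical slider callback: 'value' in [0, 1] is mapped to a voxel
    index along the axis the slider controls (R-L, A-P or F-H)."""
    z, y, x = view.point
    depth, height, width = view.volume.shape
    if which == "sagittal":        # R ... L slider
        x = int(value * (width - 1))
    elif which == "coronal":       # A ... P slider
        y = int(value * (height - 1))
    else:                          # transversal, F ... H slider
        z = int(value * (depth - 1))
    return view.move_point((z, y, x))
```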
The processing may also be to identify fibers, such as nerves, or blood vessels. This is preferably done on selected areas since otherwise the amount of identified objects may become too large to be meaningful for a human to recognize and navigate through. Preferably, the processor represents the identified 3D object(s) in both the 2D view and the 3D view. Normally, the data set contains gray value voxels. The identification can be done by using colors or changing intensity levels, or other suitable highlighting techniques.
Fig. 8 illustrates the outcome of a preferred way of indicating the 3D object in the 3D view, where the 3D object is projected onto the imaginary 2D projection screen in the same way as the 3D volume is represented in the 3D view. In this 3D view, the 3D volume is represented as three orthogonal 2D slices projected onto the screen. Preferably, the 3D object is taken as a 'full' 3D object, i.e. considering all voxels of the 3D object (or near enough to be part of the sampling function) and not just the voxels of the 3D object that are in or near the three 2D slices. Fig. 8 shows a 3D object created by fiber tracking of nerves in a brain scan. The resulting identified nerves are shown as a group of white fibers, identified by label d. Fig. 8 also illustrates a preferred way of representing the 3D object in the 2D view by projecting the 3D object onto one (but preferably all) 2D slice(s) of the 2D view. In Fig. 8 all three projections are shown using labels d. Fig. 9 shows an alternative way of representing the 3D object in the 2D view by showing an intersection of the 3D object with the 2D slice. Here the identified nerves are indicated using labels e.
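Assuming the identified 3D object is available as a boolean voxel mask (an illustrative assumption), the two 2D-view representations of Fig. 8 and Fig. 9 differ only in how that mask is collapsed onto a slice:

```python
import numpy as np

def object_projection_on_slice(mask, axis=0):
    """'Full' 3D object projected onto a slice: a pixel is marked when any
    voxel of the object lies on the projector through it (cf. labels d)."""
    return mask.any(axis=axis)

def object_intersection_with_slice(mask, axis=0, index=0):
    """Intersection of the 3D object with one 2D slice (cf. labels e)."""
    return np.take(mask, index, axis=axis)
```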
In a preferred embodiment, the processor enables a human operator to select an area in the 3D volume for further processing by selecting an area in one or more 2D slice(s) of the 2D view. This selecting may, for example, be done by dragging the mouse over a part of the 2D slice shown on a display. The processor then determines for the selected area a corresponding area in the 3D volume. It may also determine a 3D object for the selected area, e.g. perform fiber tracking in the selected area. The processor then highlights the corresponding 3D area and/or corresponding 3D object in the 3D view by projecting the area/object onto the imaginary 2D screen.
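A sketch of this lifting step, under the simplifying assumption that the selected area is a rectangle in the transversal slice at depth z (the helper name is hypothetical):

```python
import numpy as np

def region_from_2d_selection(volume, z, y0, y1, x0, x1, thickness=1):
    """Lift a rectangle selected in a 2D slice to a corresponding 3D area,
    returned as a boolean mask, e.g. to seed fiber tracking or to highlight
    the matching region in the 3D view."""
    mask = np.zeros(volume.shape, dtype=bool)
    mask[z:z + thickness, y0:y1, x0:x1] = True
    return mask
```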
Alternatively, the processor enables a human operator to select an area in the 3D view of the 3D volume for further processing. It then determines for the selected area and/or corresponding 3D object in the 3D volume a corresponding area in one (but preferably all) 2D slice(s) of the 2D view and highlights the corresponding 2D area in the 2D view.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design many alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. Use of the verb "comprise" and its conjugations does not exclude the presence of elements or steps other than those stated in a claim. The article "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the device claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.

Claims

CLAIMS:
1. A system for visualizing a three-dimensional (hereinafter "3D") volume, in particular for medical applications; the system including: an input (210) for receiving a data set representing voxel values of the 3D volume; a memory (230, 290) for storing the data set; a processor (260) for, under control of a computer program, for a selected point in the 3D volume generating a combined two-dimensional (hereinafter "2D") view and 3D view of the 3D volume, where the 2D view of the 3D volume includes at least one 2D slice through the 3D volume including the selected point and the 3D view of the volume is obtained by projecting at least part of the 3D volume including the selected point onto an imaginary 2D projection screen; and an output (240) for providing pixel values of the combined 2D and 3D view for rendering.
2. A system as claimed in claim 1, wherein the 2D slice has a predetermined orientation in the 2D view.
3. A system as claimed in claim 1, wherein the 2D view includes three orthogonal 2D slices through the 3D volume; the selected point being the intersection point of the three slices.
4. A system as claimed in claim 1, wherein the processor is operative to enable a human operator to navigate through the 3D volume by changing the selected point in a 2D slice of the 2D view and to regenerate the combined view in response to a change in the selected point.
5. A system as claimed in claims 3 and 4, wherein the processor is operative to indicate in at least a first one of the 2D slices in the 2D view respective intersection lines of the other 2D slices with the first 2D slice to enable a human operator to perform the navigation by changing a location of at least one of the intersection lines.
6. A system as claimed in claim 4, wherein the 3D view is obtained by projecting the three 2D slices onto the imaginary 2D projection screen.
7. A system as claimed in claim 5, wherein the 3D view includes an orientation and/or location of the 2D slices in the 3D view; the processor being operative to enable a human operator to navigate through the 3D volume by changing in the 3D view the selected point through changing the orientation and/or location of at least one of the 2D slices or moving the selected point.
8. A system as claimed in claim 1, wherein the processor is operative to identify a 3D object in the 3D volume and to represent the identified 3D object in the 2D view and the 3D view.
9. A system as claimed in claim 8, wherein the 3D object is represented in the 3D view by projecting the 3D object onto the imaginary 2D projection screen.
10. A system as claimed in claim 8, wherein the 3D object is represented in the 2D view by projecting the 3D object onto a 2D slice of the 2D view or by showing an intersection of the 3D object with the 2D slice.
11. A system as claimed in claim 1, wherein the processor is operative to enable a human operator to select an area in the 3D volume for further processing by: selecting an area in a 2D slice of the 2D view; determining for the selected area a corresponding area and/or corresponding 3D object in the 3D volume; and highlighting the corresponding 3D area and/or corresponding 3D object in the 3D view.
12. A system as claimed in claim 1, wherein the processor is operative to: enable a human operator to select an area in the 3D view of the 3D volume for further processing; determine for the selected area and/or corresponding 3D object in the 3D volume a corresponding area in the 2D slice of the 2D view; and highlight the corresponding 2D area in the 2D view.
13. A method of visualizing a three-dimensional (hereinafter "3D") volume, in particular for medical applications, in a system including: an input (810) for receiving a data set representing voxel values of the 3D volume; and a memory (890) for storing the data set; the method including, for a selected point in the 3D volume, generating a combined two-dimensional (hereinafter "2D") view and 3D view of the 3D volume, where the 2D view of the 3D volume includes at least one 2D slice through the 3D volume including the selected point and the 3D view of the volume is obtained by projecting at least part of the 3D volume including the selected point onto an imaginary 2D projection screen; and outputting pixel values of the combined 2D and 3D view for rendering.
14. A computer program product operative to cause a processor to perform the method of claim 13.
PCT/IB2004/050501 2003-04-24 2004-04-23 Combined 3D and 2D views WO2004095378A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP03101138.0 2003-04-24
EP03101138 2003-04-24

Publications (1)

Publication Number Publication Date
WO2004095378A1 (en) 2004-11-04

Family

ID=33305801

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2004/050501 WO2004095378A1 (en) 2004-04-23 Combined 3D and 2D views

Country Status (1)

Country Link
WO (1) WO2004095378A1 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5371778A (en) * 1991-11-29 1994-12-06 Picker International, Inc. Concurrent display and adjustment of 3D projection, coronal slice, sagittal slice, and transverse slice images
US5986662A (en) * 1996-10-16 1999-11-16 Vital Images, Inc. Advanced diagnostic viewer employing automated protocol selection for volume-rendered imaging
EP1001376A2 (en) * 1998-11-12 2000-05-17 Mitsubishi Denki Kabushiki Kaisha Three-Dimensionale cursor for a real-time volume rendering system

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
GERING D: "A System for Surgical Planning and Guidance using Image Fusion and Interventional MR", December 1999, MASTER'S THESIS, MASSACHUSETTS INSTITUTE OF TECHNOLOGY, XP002293852 *
GOLLAND P ET AL: "Anatomy Browser: a novel approach to visualization and integration of medical information", COMPUTER ASSISTED SURGERY, XX, XX, vol. 4, 1999, pages 129 - 143, XP002280194 *
ROBB R A: "Visualization in biomedical computing", PARALLEL COMPUTING, ELSEVIER PUBLISHERS, AMSTERDAM, NL, vol. 25, no. 13-14, December 1999 (1999-12-01), pages 2067 - 2110, XP004363672, ISSN: 0167-8191 *

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7629986B2 (en) 2003-11-05 2009-12-08 Bbn Technologies Corp. Motion-based visualization
EP1791092A2 (en) * 2005-11-23 2007-05-30 General Electric Company System and method for radiology decision support using picture-in-picture
EP1791092A3 (en) * 2005-11-23 2011-06-15 General Electric Company System and method for radiology decision support using picture-in-picture
EP1992289A1 (en) * 2006-03-09 2008-11-19 Imagnosis Inc. Medical 3-dimensional image display control program and medical 3-dimensional image display method
EP1992289A4 (en) * 2006-03-09 2010-05-12 Imagnosis Inc Medical 3-dimensional image display control program and medical 3-dimensional image display method
US8270697B2 (en) 2006-03-09 2012-09-18 Imagnosis Inc. Medical 3-dimensional image display control program and medical 3-dimensional image display method
JP5312932B2 (en) * 2006-03-09 2013-10-09 イマグノーシス株式会社 Medical three-dimensional image display control program and medical three-dimensional image display method
US9058679B2 (en) 2007-09-26 2015-06-16 Koninklijke Philips N.V. Visualization of anatomical data
WO2009108179A2 (en) * 2007-12-20 2009-09-03 Bbn Technologies Corp. Motion-based visualization
WO2009108179A3 (en) * 2007-12-20 2009-10-22 Bbn Technologies Corp. Motion-based visualization
US20090307628A1 (en) * 2008-06-09 2009-12-10 Metala Michael J Non-Destructive Examination Data Visualization and Analysis
US9177371B2 (en) * 2008-06-09 2015-11-03 Siemens Energy, Inc. Non-destructive examination data visualization and analysis
US8941680B2 (en) 2008-07-09 2015-01-27 Raytheon Bbn Technologies Corp. Volumetric image motion-based visualization
US8818059B2 (en) 2008-12-23 2014-08-26 Tomtec Imaging Systems Gmbh Method and device for navigation in a multi-dimensional image data set
WO2010072521A1 (en) * 2008-12-23 2010-07-01 Tomtec Imaging Systems Gmbh Method and device for navigation in a multi-dimensional image data set
US10692213B2 (en) 2015-03-10 2020-06-23 Koninklijke Philips N.V. Retrieval of corresponding structures in pairs of medical images
TWI624243B (en) * 2016-12-15 2018-05-21 神農資訊股份有限公司 Surgical navigation system and instrument guiding method thereof
WO2018178274A1 (en) * 2017-03-29 2018-10-04 Koninklijke Philips N.V. Embedded virtual light source in 3d volume linked to mpr view crosshairs
JP2020512137A (en) * 2017-03-29 2020-04-23 コーニンクレッカ フィリップス エヌ ヴェKoninklijke Philips N.V. Virtual light source embedded in the 3D volume and connected to the crosshairs in the MPR diagram
US10991149B2 (en) 2017-03-29 2021-04-27 Koninklijke Philips N.V. Embedded virtual light source in 3D volume linked to MPR view crosshairs
WO2018222471A1 (en) * 2017-05-31 2018-12-06 General Electric Company Systems and methods for displaying intersections on ultrasound images
US10499879B2 (en) 2017-05-31 2019-12-10 General Electric Company Systems and methods for displaying intersections on ultrasound images
JP2019158447A (en) * 2018-03-09 2019-09-19 東芝Itコントロールシステム株式会社 CT imaging device
JP7038576B2 (en) 2018-03-09 2022-03-18 東芝Itコントロールシステム株式会社 CT imaging device

Similar Documents

Publication Publication Date Title
US5898793A (en) System and method for surface rendering of internal structures within the interior of a solid object
US6801643B2 (en) Anatomical visualization system
Grigoryan et al. Point-based probabilistic surfaces to show surface uncertainty
US4729098A (en) System and method employing nonlinear interpolation for the display of surface structures contained within the interior region of a solid body
US8497861B2 (en) Method for direct volumetric rendering of deformable bricked volumes
EP2486548B1 (en) Interactive selection of a volume of interest in an image
US7924279B2 (en) Protocol-based volume visualization
EP0204225B1 (en) System and method for the display of surface structures contained within the interior region of a solid body
US20090322748A1 (en) Methods,systems, and computer program products for GPU-based point radiation for interactive volume sculpting and segmentation
US20050237336A1 (en) Method and system for multi-object volumetric data visualization
US7576740B2 (en) Method of volume visualization
US7889894B2 (en) Method of navigation in three-dimensional image data
WO2004095378A1 (en) Combined 3D and 2D views
EP0836729B1 (en) Anatomical visualization system
JP5122650B2 (en) Path neighborhood rendering
Brecheisen et al. Flexible GPU-based multi-volume ray-casting.
Jainek et al. Illustrative hybrid visualization and exploration of anatomical and functional brain data
JPH0697466B2 (en) Device and method for displaying a two-dimensional image of an internal surface within an object
Tory et al. Visualization of time-varying MRI data for MS lesion analysis
Beard et al. Interacting with image hierarchies for fast and accurate object segmentation
Montilla et al. Computer assisted planning using dependent texture mapping and multiple rendering projections in medical applications
Barrett et al. A low-cost PC-based image workstation for dynamic interactive display of three-dimensional anatomy
Camp et al. A system for interactive volume analysis (SIVA) of 4-D biomedical images
Demiris et al. 3-D visualization in medicine: an overview
JP2022551060A (en) Computer-implemented method and system for navigation and display of 3D image data

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): BW GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 EP: the EPO has been informed by WIPO that EP was designated in this application
122 EP: PCT application non-entry in European phase