CA2317336A1 - Occlusion resolution operators for three-dimensional detail-in-context - Google Patents

Occlusion resolution operators for three-dimensional detail-in-context

Info

Publication number
CA2317336A1
Authority
CA
Canada
Prior art keywords
ort
information
data
viewpoint
interest
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
CA002317336A
Other languages
French (fr)
Inventor
David Cowperthwaite
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Idelix Software Inc
Original Assignee
Idelix Software Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Idelix Software Inc
Priority to CA002317336A (published as CA2317336A1)
Priority to PCT/CA2001/001256 (published as WO2002021437A2)
Priority to AU2001289450A (published as AU2001289450A1)
Priority to EP01969103A (published as EP1316071A2)
Priority to CA002421378A (published as CA2421378A1)
Priority to JP2002525572A (published as JP4774187B2)
Priority to US09/946,806 (published as US6798412B2)
Publication of CA2317336A1
Priority to US10/884,978 (published as US7280105B2)
Legal status: Abandoned

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/10 Geometric effects
    • G06T 15/40 Hidden part removal
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T 2219/20 Indexing scheme for editing of 3D models
    • G06T 2219/2004 Aligning objects, relative positioning of parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T 2219/20 Indexing scheme for editing of 3D models
    • G06T 2219/2016 Rotation, translation, scaling

Abstract

The information explosion is changing the daily lives of the "wired" population. Increasing numbers of individuals are being empowered to act as their own information brokers, interacting with large and expanding data spaces, for example the World Wide Web. Scientists too are dealing with growing databases of empirical and simulated information. New tools for visualizing these spaces are being developed which incorporate three-dimensional visual representations of information. These three-dimensional representations are believed to leverage the individual's capacity for comprehension and navigation in our three-dimensional world. In practice one is faced with the inherent limitations of 2D presentation and interaction through the traditional two-dimensional desktop computer display.

The spatial limitations of the two-dimensional display (referred to as the "screen real-estate problem") have motivated the development of detail-in-context methods of information presentation and exploration. Much of the work in this field has concentrated on presentation methods for 2D information spaces. While a few techniques have incorporated 3D interaction metaphors, such as surfaces which produce magnification through perspective distortion, fewer still have focused on techniques for interaction with 3D representations of information. Three-dimensional representations of information present specific challenges not found in 2D representations, for example the effect of occlusion on the visibility of elements. Traditional approaches to dealing with occlusion in three-dimensional representations include techniques such as cutting planes, viewer navigation, filtering of information and transparency. While these methods provide clearer visual access to elements of interest, it is often at the expense of removing much of the contextual information from a representation.

Description

We present a technique which employs a new approach to some of the challenges of interacting with 3D representations of information. Specifically, we resolve occlusion of objects in a 3D scene through a layout adjustment algorithm derived from 2D detail-in-context viewing methods. Our extension beyond traditional 2D approaches to layout adjustment in 3D accounts for the specific challenges of occlusion in 3D representations, where other such extensions do not. In doing so we provide a simple yet powerful tool for providing non-occluded views of objects or regions of interest in a 3D information representation, with minimal adjustment of the original structure and without the use of cutting planes, transparency or information filtering. We call these operators Occlusion Reducing Transformations (ORTs).
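To make the operation of an ORT concrete, the following is a minimal sketch of the idea, not the implementation developed later in this document: each element's perpendicular distance to the sight line from the viewpoint to the focal point is measured, and elements close to that line are pushed radially away from it, with a Gaussian drop-off assumed here so that the layout far from the line of sight is left essentially untouched. The function name and parameters are illustrative only.

```python
import numpy as np

def apply_ort(points, viewpoint, focus, radius=1.0, strength=1.0):
    """Displace points away from the viewpoint-to-focus sight line.

    An illustrative occlusion-reducing transformation: points whose
    perpendicular distance to the sight line is small are moved radially
    outward, with a Gaussian falloff (an assumption of this sketch) so the
    layout is disturbed as little as possible far from the line of sight.
    """
    v = focus - viewpoint
    v = v / np.linalg.norm(v)                    # unit vector along the sight line
    rel = points - viewpoint
    t = rel @ v                                  # signed distance along the sight line
    nearest = viewpoint + np.outer(t, v)         # closest points on the sight line
    offset = points - nearest                    # perpendicular offset from the line
    dist = np.linalg.norm(offset, axis=1, keepdims=True)
    dist = np.where(dist < 1e-9, 1e-9, dist)     # points exactly on the line stay put here
    falloff = np.exp(-(dist / radius) ** 2)      # Gaussian drop-off of the displacement
    displaced = points + (offset / dist) * strength * falloff
    # Only displace points lying between the viewpoint and the focus;
    # elements behind the focus or behind the viewer need not move.
    mask = ((t > 0) & (t < np.linalg.norm(focus - viewpoint)))[:, None]
    return np.where(mask, displaced, points)
```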
Contents

Approval
Abstract
Dedication
Quotation
List of Tables
List of Figures

1 Introduction
  1.1 Visualization
  1.2 Three-Dimensional Information Graphics
  1.3 Cost of 3D Representations
  1.4 Detail-In-Context
  1.5 Thesis Statement
  1.6 Thesis Outline

2 Related Work
  2.1 3D Perceptual Cues
  2.2 Three-Dimensional Visualization
    2.2.1 Scientific Visualization
    2.2.2 Information Visualization
  2.3 Structural Framework for Visualization Design
  2.4 Methods for Occlusion Reduction
    2.4.1 Navigation
    2.4.2 Partial Transparency
    2.4.3 Culling
  2.5 3D Deformation Methods
    2.5.1 Space Deformation Operators
    2.5.2 Zoom Illustrator
    2.5.3 Page Avoidance
  2.6 Detail-in-Context Viewing
    2.6.1 Perspective-Based Fisheyes for 2D Layouts

3 Method
  3.1 Layout Adjustment in 2D
  3.2 Occlusion and the Sight-line
    3.2.1 Towards a Solution
    3.2.2 Redefining the Focus
    3.2.3 ORT-Relative Coordinate Systems
  3.3 Distortion Space

4 Applications
  4.1 Discrete Data Representations
    4.1.1 Regular 3D Graph Structures
    4.1.2 General 3D Node and Edge Structures
    4.1.3 Hierarchical 3D Graph Structures
    4.1.4 3D Desktop Environment
  4.2 Contiguous Data Representations
    4.2.1 3D Models
    4.2.2 Isosurface Set Data
  4.3 Continuous Data Representations
    4.3.1 Fast-Splat Rendering
    4.3.2 3D Texture-Based Rendering
    4.3.3 Temporally Sequential 2D Information
  4.4 Discussion

5 Conclusion
  5.1 Contribution
  5.2 Future Work
  5.3 Final Thought

A 3D Perception
  A.1 Perceptual Cues

B Marching Cubes

Bibliography

List of Tables

2.1 A visual comparison of a range of magnification functions (constant, linear, Gaussian, hemisphere, cosine, tangent and inverse hemisphere) and their properties of slope, apparent planar translation t(x) and resulting magnification m(x) within a perspective-distortion system such as 3DPS.
3.1 Illustration of the application of four common 2D detail-in-context layout adjustment approaches (step and non-linear orthogonal stretching, non-linear radial displacement and non-space-filling orthogonal stretching) to 3D layout via simple inclusion of the third dimension, showing the effect of moving from (x, y) to (x, y, z) for data and displacement function, the layout function without the accompanying magnification of nodes, and the displacement-only effect extended into three dimensions.
3.2 The effect of adding an ORT to the 3D extensions of some common 2D layout adjustment schemes for detail-in-context viewing. The simple extension to 3D leaves the central focal point even more occluded than before the layout adjustment in most cases; the addition of an ORT clears the line of sight from the viewpoint to the focal point.
3.3 The superquadric distance metric allows separate specification of the ns and ew shaping parameters to achieve a wider range of possible metric spaces.
3.4 Parameters available in the definition of ORT operators.
3.5 Some of the space of ORT specifications possible by varying the source and distribution of the operator: ORTs defined relative to the z-axis of the ORT CS, and ORTs defined relative to the y = 0 and x = 0 planes of the ORT CS.

List of Figures

2.1 Image-order volume rendering proceeds across scan-lines, tracing the path of rays through the volume and performing shading calculations at regular intervals or at intersections with the data grid.
2.2 Data and normal values must be interpolated from cell vertices to points within a cell; three linear interpolations are used, along edges, then across faces, then through the cell.
2.3 Object-order volume rendering methods traverse the data set and determine the contribution of each element to the final image, compositing front to back (Under operator) or back to front (Over operator).
2.4 Structural classes of three-dimensional representations.
2.5 Moving the viewpoint makes the highlighted bone (the first metatarsal), occluded in (a), visible in (b).
2.6 Movement of the viewpoint into the structure puts elements that occluded the node of interest in (a) behind the viewer in (b).
2.7 Partial transparency.
2.8 Cutting planes and regions remove volumetric data in a half space (a) or sub-volume (b) from the final image and make previously occluded surfaces visible adjacent to the cut.
2.9 Selective removal of component groups improves visibility of remaining components.
2.10 The effect of a warp operator on the path of a ray through a scene; the deflected ray results in the appearance of a deformation of the surface. (After [61])
2.11 Discontinuous ray deflectors operate by deflecting rays in opposite directions from opposite sides of a plane; ray sampling is restricted to the original side of the plane, producing a cutting and retracting form of distortion. (After [63])
2.12 The base configuration of the Perspective Wall with no regions of interest (a); the surface has one dominant linear dimension and is at a constant depth in z in the perspective viewing frustum (b).
2.13 Perspective Wall with a single ROI specified in the middle of the field of view, generating a region of increased scale and surrounding distorted regions (a); the ROI is at a constant depth in z with respect to the viewpoint in the perspective viewing frustum (b).
2.14 The features of the perspective viewing frustum: the viewpoint at the apex, the far plane forming the base, the field of view and aspect ratio defining the width and height, visibility of objects between the near and far planes, and the central axis defining the direction of view in world coordinates.
2.15 Geometry of the frustum is sheared in x to keep the viewpoint directly over the off-center ROI.
2.16 Effect of simple vertical movement of a portion of the surface in an off-center lens after perspective projection (a); the surface now extends outside of the perspective viewing frustum (b).
2.17 Shearing the distorted region so that it is oriented towards the viewpoint (a) brings the entire extent of the lens back into the projected image (b).
2.18 Shearing the lenses rather than the viewing frustum allows for the specification of multiple ROIs (a); the area of intersection of lenses must be blended to provide a smooth transition between the two shearing directions (b).
2.19 The Gaussian curve f(x) = exp(-10.0x²) used in Three-Dimensional Pliable Surfaces to provide smooth integration of the ROIs and the original information layout.
2.20 After perspective projection, the apparent transformation t(x) of points on a surface transformed by the application of a Gaussian lens with a maximum height of 1 and a viewpoint distance of 2 from the original surface plane.
2.21 The displaced position of points as a result of the Gaussian lens after perspective projection.
2.22 The magnification m(x) (with single-point perspective projection) and compression distribution as a result of the Gaussian lens.
2.23 The progression of Lp distance metrics from L1 to L200 in two dimensions.
3.1 Operation of a linear ORT in cross-section: the focal point and viewpoint define the line of sight through the structure (a); the distance of other elements to the line of sight determines the direction of displacement, and the length of these vectors forms the input to the function which determines the magnitude of the resulting displacement (b); the final transformed layout produces a clear line of sight from viewpoint to focal point (c).
3.2 Increasing degree of application of two ORTs to reveal two objects of interest (highlighted with darker color) in a 3D graph layout.
3.3 Rotation of the 3D graph to illustrate the occlusion of two objects of interest; a clear view of even the nearer of the two is available from only a limited range of viewpoints.
3.4 The same three viewpoints and same two objects of interest (highlighted and increased in scale for emphasis) with the application of ORTs; even the node at the far side of the graph is visible through the sight-line-clearing effect of the ORT.
3.5 Annotated diagrams illustrating the relative shape of a selection of ORT functions: (a) the ORT coordinate system (CS) z-axis aligned with the world CS z-axis; (b) the camera position (VRP) moved and the ORT CS re-oriented to track the change.
3.6 Schematic of the orthogonal stretch ORT; the distance of points is measured to the nearest of the three planes passing through the focal point.
3.7 Linear extrusion through the z axis of the functions describing the operation of a detail-in-context layout adjustment scheme, with the Gaussian curve f(x) = exp(-10.0x²) as the basis.
3.8 The same graphs illustrating the effect of linearly scaling the application of the basis function according to depth in z.
3.9 A secondary shaping function applied to the horizontal plane-relative ORT; scaling in z is constant but the shaping curve constrains the extent of the plane-relative function in x.
3.10 Distance measurement according to the Lp metric in three dimensions.
4.1 Three classes of three-dimensional data representations.
4.2 The original layout of the 9 x 9 x 9 3D grid-graph.
4.3 The orthogonal stretch algorithm aligned to the principal planes of the data layout space (a) and aligned to the viewer as an ORT operator (b).
4.4 The 3D grid-graph with the central node specified as the object of interest and an ORT applied to reduce occlusion; the color of the remaining nodes represents the degree to which they have been displaced, the darkest nodes having moved the most.
4.5 Examples of constant and linear scaling of the application of the ORT along the z axis of the ORT coordinate system; constant scaling isolates the object of interest against an empty background while linear scaling looks very similar to the line-segment-relative application.
4.6 ORT functions applied relative to a horizontal plane through the object of interest; objects within the plane remain in plane while those above and below are displaced. In (a) the operator is data-axis relative and does not track changes in the viewpoint; the operator in (b) is viewpoint aligned.
4.7 Caffeine molecule: C8H10N4O2.
4.8 Movement of the viewpoint around the caffeine molecule without the application of any ORT functions.
4.9 The oxygen atom indicated in (a) is selected as the atom of interest for a linear-source ORT; the same movement of the viewpoint is performed around the caffeine molecule and this atom remains visible as other atoms are deflected away from the sight-line.
4.10 Sequence illustrating the application of a linear-source ORT to the structure of vitamin B12; the oxygen atom selected as the atom of interest is in the region indicated by the overlay box.
4.11 A detail view of the region indicated by the overlay box in the previous figure, illustrating the result of the successive application of a linear-source ORT to the (initially hidden) oxygen atom.
4.12 A selected leaf node in a cone tree layout of a directory structure, indicated by the overlay in (a), is brought to the front through concentric rotations of the cone tree structure, (b) through (d).
4.13 Two leaf nodes, labelled a and b in (a), are selected simultaneously; application of two ORT operators improves the visibility of these nodes without explicitly rotating one or the other to the front, (b) and (c).
4.14 Once ORT operators are attached to nodes a and b in (a), these nodes remain visible during movement of the viewpoint, (b) and (c).
4.15 The area of influence and viewpoint alignment of the ORT operators in the previous sequence, as seen from a secondary viewpoint; the ORT operators remain aligned to the primary viewpoint as it is moved around the cone tree.
4.16 3D desktop environment.
4.17 As the selected window is pushed back through a cluster of windows in the 3D desktop environment, the cluster is dispersed in order to prevent occlusion of the selected window.
4.18 Annotated images from the previous sequence illustrating the initial position (boxes) and movement (arrows) of the selected (solid line) and other (broken line) windows.
4.19 As the selected window is moved from its initial position in the upper left of the view, the cluster of other windows which it passes in front of is dispersed by the action of the ORT attached to the selected window.
4.20 Annotation of two frames from the previous sequence: as the selected window moves from position a to position b the remaining windows are deflected by the action of the ORT; deflection vectors are shaded from dark to lighter grey, and initial and final window positions are indicated along with their resulting displacements.
4.21 The skeletal model of the foot used in the following example, containing 26 separate components and 4204 triangular faces.
4.22 The external cuneiform bone (circled in (a) and highlighted in all images) is selected as the focus and an ORT operator is used to displace the remaining 25 bones away from the sight-line.
4.23 Again the external cuneiform is the object of interest and remains visible in this sequence as the viewpoint moves around the model.
4.24 Again the external cuneiform is the object of interest and remains visible in this sequence as the viewpoint moves around the model.
4.25 In (a) no scaling is applied and the effect of the ORT is simply to displace components and reduce occlusion; in (b) components are subsequently scaled according to their geometric distance from the object of interest, the external cuneiform bone.
4.26 Figure (a) illustrates the basic configuration of the perspective viewing volume and 3D model, with spheres indicating the viewpoint, the view reference point and the point midway between; components of the model are translated along their individual lines of sight in (b) to produce magnification via perspective projection.
4.27 The effect of decreasing the distance d from the viewpoint on projected scale in perspective projection; final scale varies as the inverse of the change in distance.
4.28 Side (a) and front (b) views of the foot model with the navicular bone selected as an object of interest and highlighted; no distortion or magnification has been applied and the bone remains all but completely occluded in these two views.
4.29 Side (a) and front (b) views of the foot model with the navicular bone selected as an object of interest and highlighted; distortion only has been applied to the layout of the model, with no scaling for emphasis.
4.30 The navicular bone is selected as an object of interest and an ORT is applied to reduce occlusion, with a small degree of magnification applied to emphasize the navicular bone and its neighborhood; magnification is produced through perspective transformation, so the navicular is rendered in front of other bones that might still have caused partial occlusion.
4.31 The same two views of the human foot with the navicular bone as the object of interest; here magnification is produced via in-place scaling of the individual components, and in (b) the interior cuneiform bone now partially occludes the navicular.
4.32 A detail view of the area just in front of the navicular bone with in-place scaling (a) and perspective scaling (b); the intersection of the external cuneiform and the third metatarsal in (a) is resolved in (b) by the relative displacement of the components in depth.
4.33 Example source images for the generation of Marching Cubes derived surfaces of MRI data.
4.34 Three separate surfaces from diagnostic MRI data of Multiple Sclerosis (MS) lesions: proton-density layers (a) reveal outer surfaces such as the skin, T2 layers (b) reveal neural tissue (brain and eyes), while the lesion mask (c) indicates the location of MS lesions; these three data sets are used in demonstrating the application of an ORT to volumetric data visualization.
4.35 Composite of the three surfaces, rendered slightly transparent in order to make the spatial organization apparent.
4.36 Sequence illustrating the application of an ORT to isosurface data; the lesion mask layer is not affected by the scaled and truncated planar deformation and is revealed as the outer layers are cut and pushed back.
4.37 UNC head data set rendered via fast-splatting.
4.38 The application of a vertical-plane-source ORT to CT data of a human skull rendered via fast splatting; the increase in brightness at the edge of the ORT-induced split is the result of overlapping splat primitives.
4.39 A horizontal-plane ORT applied to the UNC head data set; in (a) the ORT is aligned to the viewpoint, while in (b) the viewpoint has been moved independently of the ORT to illustrate the linear scaling of the application of the ORT in view-aligned depth, from the front of the representation to the depth of the region of interest.
4.40 The same data set and orientation of views, with a shaping curve added to control the extent of the ORT operator across the horizontal plane.
4.41 The Visible Human Female data set with a plane-relative ORT applied; here the ORT is scaled in depth from the front to the back of the data set, rather than from the front to the region of interest.
4.42 Relation of the slice domain to the volume data domain.
4.43 Two basic approaches to the alignment of slices in 3D-texture hardware accelerated volume rendering.
4.44 2D Gaussian function f(x, y) = exp(-10.0x² - 10.0y²).
4.45 Hessian of the 2D Gaussian function f(x, y) = exp(-10.0x² - 10.0y²).
4.46 Anisotropic mesh aligned to the Hessian of the Gaussian function.
4.47 Sampling planes aligned to the data space axis (a) or centered on the sight-line (b).
4.48 Configuration of the tessellated plane and hidden texture surface used in demonstrating the stretch approach to ORT application.
4.49 Progressive application of the deformation and resulting transparency effect; as triangles are stretched they are made progressively less opaque, so that in the area of the deformation the background layer becomes visible.
4.50 Detail view illustrating the transition of opacity values at the boundary of the deformation which results in the blurry appearance.
4.51 The initial configuration of the slice sampling mesh; triangulation density is increased in the inside corner where ORT displacements will occur, minimizing the extent of linear interpolation of texture coordinates.
4.52 Introduction of a semi-circular deformation of the texture sampling mesh by deforming vertices along the y axis.
4.53 Mirroring the single deformed texture sample plane allows the creation of a closed empty region in the middle of the plane.
4.54 OpenGL clipping planes are used to trim the texture planes to the boundaries of the volume presentation space.
4.55 Progressive application of an ORT to produce a horizontal, shaped opening in a single plane in a volumetric representation.
4.56 Progressive application of an ORT to produce a vertical, shaped opening in a single plane in a volumetric representation.
4.57 Increasing the width of the shaping function to enlarge the horizontal ORT in a single slice of a volumetric data set.
4.58 The texture transformation matrix is manipulated so that, as the intersection of the sampling planes is moved across the presentation space, the texture space remains stationary.
4.59 The Visible Human Male data set rendered via 3D-texture slicing.
4.60 The application of a horizontal ORT to the Visible Human Male data set; the point of interest is behind the left eye and the effect of the ORT is to reveal two cut surfaces aligned to the viewpoint without the removal of data.
4.61 A more centrally located point of interest is specified in the Visible Human Male data set and the viewpoint is moved around the head from the front to the left side.
4.62 The UNC head CT data set with vertically and horizontally aligned ORT functions applied to reveal cut surfaces aligned to the current viewpoint.
4.63 Arrangement of spatio-temporal data as a 3-dimensional cube by using a spatial axis to represent time.
4.64 A block of spatio-temporal landscape data and an ORT operator applied to reveal the state of the landscape at an instant in time.
4.65 Positioning a split in a data-cube, applying an ORT operator to reveal two internal faces, repositioning the viewpoint to obtain a more perpendicular view of the right face, and finally selecting a new point at which to position the split.
4.66 Operation of the book mode ORT with the hardcover appearance.
A.1 Wireframe images with no depth information.
A.2 Image containing several depth cues.
A.3 Perspective illusion.
A.4 Stereo viewing.
A.5 The simulated view from the left and right eye, including depth of field and perspective foreshortening effects.
A.6 Texture gradient effect.
A.7 Stereo pair.
A.8 Floating region.
B.1 A simple equipotential surface through an implicit model.

B.2 UNC head CT data set rendered as an isosurface.
B.3 The United States National Library of Medicine Visible Human Project data sets; the male (a) and female (b) data sets are derived from axial slices of the visible data, obtained from the NPAC/OLDA online data source.
B.4 The 15 basic cases of edge crossings in the Marching Cubes algorithm.

Chapter 1

Introduction

By now, the "information explosion" is not new to anyone involved in information sciences. It is a part of the everyday experience of those for whom the Internet has become integral to their daily lives. News, communications, consumer information and entertainment, business and financial transactions all form the information space made available through the interface of our personal computers (PCs) via the World Wide Web (WWW) [7]. The enormous number of sources and forms of data has put a serious strain on the capacity of individuals for finding and utilizing relevant information.
Another result of the direct connection of PCs to the WWW is that it has empowered users to act as their own information brokers. Traditionally, when investigating a topic we would approach a domain expert, describe our situation and allow them to assist us in obtaining the relevant information. This has been true in market research, financial analysis and planning, as well as in consumer-oriented tasks such as product research and travel planning. With the WWW we are told that this information is all a "click away" from our desktop PC. What the user experiences is a torrent of information on a subject, delivered through increasingly high-bandwidth connections to the WWW. What users are missing, however, is the filtering, analysis and presentation provided by the domain-expert information broker.
Information overload is not merely a problem for those using the WWW. Businesses and scientific institutions are amassing huge databases of information for study and analysis. The rate of accumulation of such data, and the increasingly complex and abstract information space it represents, drive the demand for the development of new tools for storing, processing and presenting this data in order to make it useful and meaningful.
1.1 Visualization

Visualization in computing is the process of applying computer graphics techniques to the problem of presenting information to users. Visualization is often divided into two broad fields: scientific visualization and information visualization. Each of these fields may be further subdivided into more specific domains; for example, scientific visualization encompasses volume visualization, flow visualization, medical visualization and so on. Scientific visualization generally involves data which possesses a mapping to some real physical space; that is, it often deals with information which is the product of measurement or simulation of a real-world process. This presents a natural mapping of the resulting data to a visual, structural representation that is closely related to the source of the data. Information visualization, on the other hand, may deal with much more abstract sources which have no such natural mapping and therefore require novel representations for the data.
Both fields of visualization are more generally concerned with the creation of visual representations of information that will support understanding and analysis, and promote insight. This often means developing a visual representation or metaphor for a non-visual phenomenon. A graph is an excellent example of mapping the semantic connections of an information structure (a common example is links between pages on the WWW) to a visual structure. Such a structure is a visual formalism [45]. A formalism must be learned to be understood, and it cannot be assumed to be universal, as its meaning may differ by group or culture. For example, the symbolic meanings associated with specific colors vary between cultures: in western cultures white is a color which evokes images of purity, whereas in eastern cultures it is most often associated with mourning and loss [110]. Other examples of visual representations of information include abstractions of numerical data in large tables and geographical representations (encoding information as position, color or height on a surface). There are of course many other examples of visualization techniques, and an increasing number are presented each year, but they all have in common the transformation of information into a visual representation.

Visualization as a whole is not a new activity, nor is it a by-product of the information age and the development of the computer. Humans have been visualizing even abstract information for thousands of years. Ancient Egyptian tomb paintings of the universe, with the star-covered body of the sky-goddess Nut arching over the earth, separating the world from the chaos beyond, are visual representations of that culture's abstract model of the cosmos. The medical and technical illustrations of Leonardo da Vinci are exquisite examples of early scientific visualization. Leonhard Euler, a Swiss mathematician, developed the visual formalism of Euler circles, which later evolved into Venn diagrams and graphs.
1.2 Three-Dimensional Information Graphics

The evolution of information graphics from traditional print media, through film and finally to the computer display, has introduced a new dimension of interactivity to visualization tools. Animation and cinematography made dynamic information presentations possible before the advent of computer-generated graphics, but they were generally non-interactive, play-only, as well as time-consuming to produce and modify.

Computer graphics is making the use of interactive three-dimensional representations of data in visualization increasingly common. Although traditional media are still used to produce more concrete three-dimensional representations, as well as 3D presentations through the construction of physical models, these can be more costly to produce and difficult to edit or reuse.
Traditional two-dimensional media may also present 3D data as 2D projections. The disciplines of drafting and technical illustration were built around the accurate representation of 3D objects as 2D projections, and perspective-correct two-dimensional images have been a part of our culture since the Renaissance. Similarly, computer displays are still limited to ultimately producing a 2D presentation through the application of the synthetic camera and projection [36]. This is true for both display devices and most output devices, although there now exist some systems providing a limited level of true 3D display or output, including holographic displays [107]¹ and rapid prototyping machines. Limiting our discussion to common display devices, there remain various levels of support which may be provided to aid in the perception of a truly 3D presentation.
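As a reminder of what that final 2D presentation step involves, a synthetic (pinhole) camera reduces every 3D point to a 2D image position. The following is a minimal sketch under the assumption of a view direction fixed along the negative z axis; it is not a full graphics pipeline, and the names and parameters are illustrative only.

```python
def project_perspective(point, eye, d=1.0):
    """Project a 3D point onto a 2D image plane a distance d in front of the eye.

    A simplified pinhole-camera sketch: the view direction is assumed to be
    the negative z axis, and rotation, clipping and viewport mapping are
    omitted.
    """
    x, y, z = (point[0] - eye[0], point[1] - eye[1], point[2] - eye[2])
    if z >= 0:
        raise ValueError("point is behind the eye in this simplified setup")
    # Similar triangles: screen coordinates shrink in proportion to depth.
    return (d * x / -z, d * y / -z)
```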
Even with these limitations, there is a general trend towards the increased use of 3D visual representations within the field of information visualization [71]. The third spatial dimension may be employed to generate a representation for the data that is more intuitive, attributing to it some of the characteristics of a real-world object [8, 15]. Information visualization systems such as SemNet [33], Data Mountain [88], Task Gallery [90], Web Forager [17] and Graph Visualizer 3D [115] all seek to use the spatial memory and navigation capabilities of users to increase the effectiveness of interaction with an information space. The third spatial dimension or coordinate may also be used to provide increased opportunities for the proximal presentation of information, which has been shown to be more effective in tasks of information integration [119].

There is an underlying general perception that a 3D representation of information affords a much greater capacity for information presentation. The results of studies such as those by Franck and Ware [112] seem to indicate that modest increases in capacity of two to three times do result from the application of 3D representations in some tasks, provided appropriate perceptual support of the 3D scene is incorporated.

¹ Combining LCDs and lenticular lenses to facilitate the simultaneous single-screen display of stereo image pairs which do not require the use of special glasses to view as stereo images.

1.3 Cost of 3D Representations

There are a number of costs associated with the use of three-dimensional representations in visualization: first, the additional computational complexity of storing a third dimension of data and rendering 3D scenes onto a 2D display; second, the additional attention to perceptual cues associated with the display of the 3D scene when viewed on a 2D display; and third, the possibility that some elements will be occluded by others.

In answer to the first point, the availability of increasingly powerful and economical graphics accelerators addresses a large part of the computational cost of producing and interacting with 3D visual representations. At the same time, software which facilitates the creation of 3D visual representations is being developed to leverage the capabilities of the modern PC.

With respect to the second issue, we are able to draw on the understanding of the operation of human perceptual mechanisms from other fields such as cognitive psychology. In doing so we can tune our use of 3D representations to provide greater expressive power. Works such as those of Tufte illustrate the potential use, and misuse, of these mechanisms.

As to the third cost, occlusion is one phenomenon at work in 3D visual representations that is not present in 2D representations, where all information is restricted to a plane perpendicular to the viewer. The addition of the third spatial variable leads to the possibility of objects being interposed (positioned) between the viewpoint and other objects in a scene, thus partially or completely hiding them from a particular view. The correct preservation of spatial relationships and presentation of occlusion relationships is important in constructing a scene with any degree of physical plausibility; the development of accurate visible surface determination algorithms was an active area relatively early in the development of the field of computer graphics.

In using 3D representations in visualization, however, occlusion may work against us. For example, in volumetric rendering of 3D data it is often the case that the near-continuous nature of the data makes occlusion of interior features of the data inevitable. This phenomenon is important in supporting perception of the scene as a 3D representation, but one may very well wish to examine these hidden interior regions.

Solutions are available to provide visual access (clear lines of sight) to previously occluded elements. Cutting planes may be used to remove information from a scene. Increasing the transparency (or reducing the opacity) of objects allows more distant objects to be seen through those more proximal to the viewer. Navigation of the viewer, whether egocentric (moving the viewer within the data space) or exocentric (moving or re-orienting the data space), may lead to a configuration where occlusion is resolved. Finally, information filtering may be used to reduce the density of data in a representation. These are all common methods of occlusion resolution and all operate by reducing the amount (or visibility) of contextual information in the final presentation. Similar methods (such as panning, zooming and filtering) have also traditionally been applied to dealing with large or congested displays of information in two dimensions.
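For later contrast with the approach taken in this document, the half-space removal performed by a cutting plane can be sketched in a few lines: everything on the far side of the plane is simply dropped, which is exactly the loss of context noted above. The function and its parameters are illustrative assumptions, not code from the original text.

```python
def clip_by_plane(points, plane_point, plane_normal):
    """Keep only the points on the positive side of a cutting plane.

    The plane is defined by a point on it and its normal; points with a
    negative signed distance are removed from the presentation entirely,
    which is the loss of contextual information discussed above.
    """
    kept = []
    for p in points:
        signed_dist = sum((p[i] - plane_point[i]) * plane_normal[i] for i in range(3))
        if signed_dist >= 0.0:
            kept.append(p)
    return kept
```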
1.4 Detail-In-Context

The removal of information from a presentation has been one approach to dealing with large information spaces. A second approach has been the development of detail-in-context presentation algorithms.

The field of detail-in-context viewing is concerned with the generation of classes of information presentations where areas or items defined as focal regions are presented with an increased level of detail, without the removal of contextual information from the original presentation. Early work in this area includes the Bifocal Display of Spence and Apperley [99] and the Fisheye views of Furnas [39]. Each of these systems sought to provide multiple regions of scale within a single presentation. Furnas' subsequent Generalized Fisheye View [40] incorporated the idea of a degree of interest (DOI) function. Given an object or region of interest (ROI), the DOI function combined an a priori importance value for the remaining elements of the representation with their distance from the ROI to determine their final importance. This importance translated into the relative display size of the components. Thus regions of greatest interest were displayed at the largest size, providing more visual detail, and the scale of the surrounding context was adjusted to provide the space for the magnification of the ROI. This may have been a simple scale adjustment, or it may also have involved symbolic replacement of elements where insufficient scale was available for a meaningful representation. Furnas' work included studies that pointed to human information processing and organization operating in a manner that parallels the properties of a fisheye display. People tend to retain and recall detailed knowledge about specific elements of a domain and less detailed knowledge about the extents of the domain. He suggested that the evidence from his studies indicated that detail-in-context presentation methods may be an intuitive tool, leveraging this human capacity for information navigation and retrieval in a manner not so different from the way in which 3D representations leverage our spatial navigation and memory skills.
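As a sketch of the generalized fisheye idea, using the commonly cited additive form rather than code from the original work, the degree of interest combines an a priori importance with a penalty for distance from the focus, and display size follows the resulting DOI. The names and the linear size mapping below are illustrative assumptions.

```python
def degree_of_interest(api, distance, weight=1.0):
    """Generalized-fisheye DOI: a priori importance minus a distance penalty.

    Elements close to the focus, or intrinsically important ones, receive a
    high DOI and are therefore drawn at a larger size (or in more detail).
    """
    return api - weight * distance

def display_scale(doi, doi_min, doi_max, min_scale=0.2, max_scale=1.0):
    """Map a DOI value onto a relative display size (a simple linear mapping)."""
    t = (doi - doi_min) / max(doi_max - doi_min, 1e-9)
    return min_scale + t * (max_scale - min_scale)
```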
Recent detail-in-context research has concentrated on the construction of multi-scale presentation spaces for 2D information. Examples include the Perspective Wall [72], Document Lens [91], Continuous Zoom [32, 4], Rubber Sheet [94, 95], Non-linear Magnification [57], Pad++ [5], Hyperbolic Space [65], CATGraph [55], FocusLine [44] and our own work on Three-Dimensional Pliable Surfaces (3DPS) [19]. Work has also been conducted in the area of detail-in-context views for 3D information representations, including SemNet [33], ConeTrees [92], 3DZoom [84], Visual Access [27] and Non-linear Magnification [59].
1.5 Thesis Statement

A unique situation arises with detail-in-context viewing and 3D representations of information: the possibility of the ROI being occluded. As we have seen, there are a number of methods which have been developed, independent of detail-in-context viewing, for dealing with situations of occlusion in 3D information representations. Each of these methods involves removing information from a display in some manner, which is contrary to the goals of detail-in-context viewing. This thesis presents a new approach to occlusion resolution that provides detail-in-context viewing for 3D information representations. It maintains the representation of the data while dealing with occlusion where it occurs, namely along the line of sight connecting the viewpoint to the ROI.
1.6 Thesis Outline

In Chapter 2 we examine some of the most important aspects and mechanisms of human 3D perception as they apply to 3D visual representations of information in visualization. We also explore in more detail the field of visualization, specifically the application of 3D visual representations in a number of systems. Occlusion reduction methods are discussed, and we examine the field of detail-in-context viewing for both 2D and 3D information spaces. The development of a novel viewer-aligned occlusion reducing transformation (ORT) operator, which seeks to integrate the benefits of detail-in-context viewing with the occlusion reduction capability necessary to deal with 3D information spaces, is presented in Chapter 3. Chapter 4 demonstrates the application of ORTs to a range of 3D visualizations. Finally, in Chapter 5 we discuss the potential for future work.

Chapter 2

Related Work

In this thesis we are interested in the presentation and perception of three-dimensional visual representations of information on two-dimensional computer displays. There is an immense amount of research dedicated solely to the study of the psychophysical and cognitive issues involved in 3D perception, and we hope only to provide a broad survey of some of the most relevant aspects of this field.

The 3D visual representations which interest us are particularly a product of the field of visualization. A number of classification schemes dealing with aspects of the visualization process have been proposed. These analyses have mainly dealt with the cognitive aspects of interaction with such representations and, to a lesser degree, the visual structures employed. We will formulate a basic classification scheme that sorts visual representations according to their spatial layout characteristics and use this classification to frame our discussion of 3D detail-in-context presentations.

We will begin with a brief examination of the mechanisms employed in the perception of 3D scenes. Occlusion plays a significant role in our understanding of 3D structures, and we will see how it relates to the other perceptual cues we may find in an image.
2.1 3D Perceptual Cues

The aspects of an image or sequence of images from which we derive information about the three-dimensional structure of a scene are the depth cues within the image or images. These cues are broken down into two principal groups: primary and secondary depth cues. The primary cues are those involving the operation of the human vision system as it interacts with the 3D world. These include binocular disparity, as well as convergence and accommodation. The secondary depth cues are also called pictorial depth cues. These cues are those which can be found in a 2D image and do not involve the physical state of the human visual system to derive depth information. Occlusion, motion parallax, the kinetic depth effect, shading, shadows, perspective distortion, relative sizes of objects and changes in texture are all secondary depth cues. We have included a more detailed discussion of these perceptual mechanisms in Appendix A.

In producing images of 3D scenes in computer graphics we apply some combination of these primary and secondary cues in order to support the perception of a scene as three-dimensional. Immersive virtual environments (IVEs) seek to produce a sense of the user truly being a part of the 3D world. To that end they employ as many of the primary depth cues as possible, especially stereopsis and motion parallax with head-tracking. Images in such IVEs may be presented to users through the use of head-mounted displays or surround-screen projections. In head-mounted displays (HMDs) separate images may be presented to each eye to effect stereopsis, but since the field of view in such systems is generally narrow, a second alternative of a single, wider or wraparound image is also an available configuration. These wider-angle single-image displays seek to incorporate more of the peripheral vision system, as is the goal of wrap-around projection systems such as the CAVE [30]. In each of these systems users are generally head-tracked in order to provide the correct motion cues; in HMDs the orientation of the viewer (the direction in which the user is aiming the display) is also tracked so that the correct view is presented as the direction of view is rotated or tilted.
However, a problem arises in the use of stereo presentation in either of these sorts of systems. Since images are always produced on a single surface, lacking the depth of field of non-synthetic images, the cues of stereopsis, convergence and accommodation will produce conflicting results as one's eyes find only one distance at which to focus, but encounter varying depth from stereopsis. This inconsistency is a major issue in the fatiguing and disorienting nature of these environments.

More limited virtual realities are described as fish tank VR [114] or Desktop VR [89]. In these systems a more limited world is presented, typically a small volume extending into, and protruding out of, a desktop (rather than head-mounted or surround) display. In fish tank VR images may still be displayed in stereo, and the head is tracked to produce appropriate motion parallax cues. Desktop VR is the display of interactive 3D graphics on a desktop display without head tracking. DOOM and similar games are examples of desktop VR environments. In each, secondary depth cues (perspective, shading, shadowing, texture, motion) play a strong role in inducing a sense of emotional immersion in a scene.
In most cases we are likely to find ourselves limited to the situation of Desktop VR, lacking head-tracking and stereoscopic images. This means that a sense of a 3D presentation space must be generated through the availability of secondary depth cues. The lack of common access to primary depth cues in computer graphics is of relatively limited concern in most tasks, since it has been shown that, when applied properly and in combination, these secondary cues have a significant effect on the 3D perception of a scene and are only marginally improved upon by the addition of primary cues [49, 112].
Occlusion is the aspect of the display of information in three-dimensions with which we are most concerned here. The correct presentation of occlusion relation-ships, nearer elements in a presentation hiding those more distant from the viewpoint, is a basic function in the creation of realistic computer-generated graphics.
A consid-erable portion of the early work in the development of the field of computer graphics involved techniques to derive the correct presentation of visible surfaces in the final image. The correct presentation of occlusion is a powerful tool in supporting the per-ception of a scene in a 2D images as three-dimensional. Occlusion is such a powerful secondary depth cue that it will override principle depth cues such as stereopsis In CHAPTER 2. RELATED WORK 12 situations where the two cues are conflicting. Occlusion is also a challenge in the use of 3D computer-generated images in visualization. There is a sense in the field of visualization that the application of 3D representations will increase the information carrying capacity of a display, as well as leveraging the inherent human capacity for comprehension within a 3D world.
At the same time the effect of occlusion is to limit the part of a representation that is visible from any particular viewpoint. The work of Ware and Franck in [112] examines the relative effectiveness of 3D displays, with levels of support for 3D perception ranging from desktop VR to fish tank VR, and finds an increase in performance with 3D representations, though not directly proportional to the perceived increase in the capacity of the information space. If we consider a 2D information space to have an n^2 information capacity (where n is the width and height of the plane), then the naive expectation is that the addition of depth to the space will increase the capacity to n^3, a geometric increase in capacity. The work in [112] finds a rather more modest increase in effectiveness of two to three times n^2.
2.2 Three-Dimensional Visualization

The field of 3D information visualization has produced numerous examples of representations for abstract information. Concurrently, much of the work in scientific visualization has centered around refining techniques related to the presentation of volumetric data, such as surface extraction algorithms, direct volume rendering algorithms and flow visualization.
Numerous approaches have been taken to the construction of classification systems with which to organize this space of 3D information graphics. These have included examinations of the tasks related to the use of the visual representations of data, studies of the structural properties of the resulting information representations and analyses of the cognitive aspects of the abstractions employed in information visualization systems. We will examine a selection of these classifications of visualization methods. We begin with a brief overview of the fields of scientific and information visualization.

2.2.1 Scientific Visualization

Scientific visualization is generally presented as a distinct sub-field, separate from information visualization. The physical or simulated source of data in scientific visualization (especially in a subdomain such as medical visualization) often suggests an appropriate visual representation for the data, precluding the process of choosing, or innovating, a new representation as we are often faced with doing in information visualization. We are most interested in methods for the presentation of 3D representations, most notably for the presentation of volume data.
One of the simplest and most familiar 3D visualization techniques in scientific visualization is the 3D surface plot. Surface plots are a simple extension of 2D plots of functions or data to 3D with the addition of a third layout axis.
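For instance (a minimal sketch using Python and matplotlib, not part of any system discussed here; the plotted function and ranges are arbitrary examples), a 2D plot of a function becomes a surface plot by laying out its value along a third axis:

    import numpy as np
    import matplotlib.pyplot as plt
    from mpl_toolkits.mplot3d import Axes3D  # registers the 3D projection

    x = np.linspace(-2.0, 2.0, 50)
    y = np.linspace(-2.0, 2.0, 50)
    X, Y = np.meshgrid(x, y)
    Z = np.exp(-(X**2 + Y**2))              # sample scalar function over the plane

    fig = plt.figure()
    ax = fig.add_subplot(projection="3d")   # the third layout axis
    ax.plot_surface(X, Y, Z)                # height encodes the function value
    plt.show()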
Figure 2.1: Image-order volume rendering proceeds across scan-lines, tracing the path of rays through the volume and performing shading calculations at regular intervals or at intersections with the data grid.
More complex applications of visualization to scientific data include presentations of information from fluid-flow measurement or simulation. A variety of methods have been developed to aid in the visual representation of the paths of particles in a flow.

Figure 2.2: Data and normal values must be interpolated from cell vertices to points within a cell. Three linear interpolations are used to accomplish this: along edges, then across faces, then through the cell.
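The interpolation the caption describes is standard trilinear interpolation; a minimal sketch in Python follows (the array layout and function name are our own illustrative choices):

    import numpy as np

    def trilinear(cell_values, tx, ty, tz):
        """Interpolate a value inside a cell from its 8 vertex values.

        cell_values[i][j][k] holds the value at vertex (i, j, k), i, j, k in {0, 1};
        tx, ty, tz are the fractional coordinates of the sample point within the cell.
        """
        c = np.asarray(cell_values, dtype=float)
        # First: four linear interpolations along the x edges.
        c00 = c[0, 0, 0] * (1 - tx) + c[1, 0, 0] * tx
        c10 = c[0, 1, 0] * (1 - tx) + c[1, 1, 0] * tx
        c01 = c[0, 0, 1] * (1 - tx) + c[1, 0, 1] * tx
        c11 = c[0, 1, 1] * (1 - tx) + c[1, 1, 1] * tx
        # Second: two interpolations across the faces (along y).
        c0 = c00 * (1 - ty) + c10 * ty
        c1 = c01 * (1 - ty) + c11 * ty
        # Third: one interpolation through the cell (along z).
        return c0 * (1 - tz) + c1 * tz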
Icons, arrows, or "hedgehogs" [103], particle animation [100, 108], streamlines or ribbons [109], streaklines and line integral convolution (LIC) [14] are all common means of producing a visual representation of the movement of particles over time. Hedgehogs and LIC are most often applied to 2D representations, as their presentation in 3D leads to a degree of visual clutter which makes spatial organization and structure more difficult to interpret. Interrante and Grosch have presented a number of techniques [51, 52] which are designed to enhance the correct perception of 3D LIC structures through the application of color and shading, as well as by enhancing the appearance of the depth-order of the elements of the display with visibility-impeding halos.
In situations where experimental equipment or simulation produces scalar data which is three-dimensional and approximately continuous in nature, this data is often described as volumetric. Visual representation of such volume data may be approached in one of two principal manners, depending mainly on the character of the data itself. Some volume data will contain distinct boundaries, such as between muscle or other soft tissues and bone in 3D radiological imaging techniques such as Computed Tomography (CT), Magnetic Resonance Tomography (MRT), or 3D Ultrasound. In these cases the boundaries in the data may be represented as surfaces, and surface extraction algorithms such as Marching Cubes (see Appendix B) may be used to convert these boundaries into geometric models. This geometry may then be treated in the same way as other 3D models: saved offline, edited, rendered and interacted with using traditional 3D graphics techniques and accelerated hardware.

Figure 2.3: Object-order volume rendering methods traverse the data set and determine the contribution of each element to the final image. Object-order methods may operate with front-to-back (Under operator) or back-to-front (Over operator) composition.
In other cases the data is more amorphous, lacking such distinct boundaries; for example, MRT imaging of the brain. If surface extraction algorithms are not feasible then direct volume rendering (DVR) methods may be used to generate visual representations of the 3D data. DVR algorithms include ray-tracing of volume data [10, 54, 69], splatting [118], shear-warp factorization [64], Fourier-domain volume rendering [74] and hardware accelerated 3D texture rendering [1, 13, 31]. With the exception of Fourier-domain volume rendering, each of these algorithms operates by compositing the values of the elements of the volume data (voxels or cells) which lie on the line of sight (ray) behind each pixel.
Composition proceeds by accumulating the contributions of the individual volume elements encountered along the ray, weighting each according to the intensity (value) and opacity of the voxel, which are determined by the application of a transfer function, and combining them with an operator such as the Over, Under or Maximum Intensity operator.
The process of composition may proceed from front-to-back or back-to-front of the data, and may progress in image order (figure 2.1), pixel by pixel, as in ray-tracing, or in object order (figure 2.3), voxel plane by plane, as in splatting.
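As an illustration of this compositing step, the sketch below shows front-to-back accumulation along a single ray (the Under-style composition of the figure 2.3 caption); the transfer function, early-termination threshold and data layout are illustrative assumptions rather than details of any cited algorithm.

    import numpy as np

    def composite_ray_front_to_back(samples, transfer_function):
        """Accumulate intensity and opacity along one ray, nearest sample first.

        samples: scalar values encountered along the ray.
        transfer_function: maps a scalar sample to (intensity, opacity).
        """
        color = 0.0
        alpha = 0.0
        for s in samples:
            c, a = transfer_function(s)
            color += (1.0 - alpha) * a * c   # contribution attenuated by accumulated opacity
            alpha += (1.0 - alpha) * a
            if alpha > 0.99:                 # early ray termination once effectively opaque
                break
        return color, alpha

    # Example: a simple linear transfer function on normalized data.
    tf = lambda s: (s, 0.1 * s)
    print(composite_ray_front_to_back(np.linspace(0.2, 0.9, 64), tf))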
Fourier-domain volume rendering can be described as a reversal of the data acquisition process. While Fourier-domain volume rendering is able to quickly generate views of the data from new orientations, the process loses the depth information in the images and is not able to produce visible-surface images as the other DVR algorithms are. Rather, Fourier-domain volume rendering produces X-ray-like images of the accumulated intensities through the data. Here occlusion of internal elements is not as significant a problem as is the interpretation of the images to distinguish individual components.
The remaining DVR algorithms are capable of producing realistic visible-surface renderings, or semi-transparent renderings where internal structures are revealed by reducing the opacity applied to specific segments of the data. Maximum intensity images of the data render only the brightest values encountered along the ray traversing the volume and are well suited to revealing internal structures which have been artificially highlighted.
An alternative to transparency as a means of revealing internal structures is to apply cutting planes to remove information based on its location in the volume, rather than removing it according to its classification with reduced opacity. Note that each of these solutions (cutting planes and transparency) reveals elements of the volume by removal (partial or complete) of occluding elements.
A more recent approach for providing internal access to regions of a volume rendered image comes from the work of Kurzion and Yagel on Discontinuous Ray Deflectors [61, 62]. We examine the operation of ray deflectors in more detail in section 2.5.1 of this chapter.

2.2.2 Information Visualization

In moving towards the use of three spatial dimensions in information visualization we expand the repertoire of spatial and visual variables available to us in the construction of a representation. The addition of a third spatial variable creates a volume in which to arrange elements rather than the line or plane we were restricted to in using one or two dimensions for layouts. An understanding of the differences in the perception of volumes versus areas is important in the formulation of representations of information where volume is used to encode information. Tufte [103, 104, 105] provides many examples of the misrepresentation of quantities in visual displays of information that may arise from the careless application of three-dimensional features to an information representation.
The collective space in which we represent and experience visual abstractions of information has come to be known commonly as "cyberspace". This term was coined in the 1984 novel Neuromancer by William Gibson. Cyberspace has also been termed Benediktine Space, after the work of Michael Benedikt, who described the characteristics and principles of such a space [6]. In describing this space, Benedikt identified the intrinsic and extrinsic spatial dimensions of components arranged in the space. The extrinsic dimensions of an object specify a point within space, while the intrinsic dimensions specify the object's attributes: color, shape, texture and size.
This formulation of intrinsic and extrinsic variables is similar to the spatial and visual variables identified by Bertin in [8]. Benedikt also describes a number of principles for the distribution of elements representing abstractions of information within such a space.
"Cyberspace. A consensual hallucination experienced daily by billions of legitimate operators, in every nation.... A graphic representation of data abstracted from the banks of every computer in the human system. Unthinkable complexity. Lines of light ranged in the nonspace of the mind, clusters and constellations of data. Like city lights;
receding ..."
- William Gibson, Neuromancer, 1984 CHAPTER 2. RELATED WORK 18 The realm of information visualization has produced a host of approaches to the vi-sual representation of abstract forms of information. In order to structure and analyse these meth0(1S, as Well a.S prOVlde a b~.SI,S fOr flltllre apprOa(',heS, SPVeral classification shemes have been proposed.
Cognitive and Structural Frameworks

Wiss and Carr examine the cognitive features of a number of information visualization systems [121]. The specific cognitive aspects considered are those concerning the methods employed to draw attention to significant elements of the data, the means of supporting information structuring and information hiding through abstraction, and the affordances which the system offers users as a means of interaction. In analysing the cognitive aspects of information visualization systems the authors present a second, structural, framework for the classification of such tools. The four broad categories of designs identified are: node-link styles (SemNet [33], Cone and Cam Trees [92], Hyperbolic Space [65] and SeeNet3D [28]), raised-surface styles (Perspective Wall [72], Document Lens [91] and 3DPS [19]), information landscapes (File System Navigator, Harmony [2], SDM [26], Bead [25] and WebForager [17]) and "other" designs (WebBook [17], Information Cube [86] and n-Vision [35]). While many information visualization systems support interaction with the user via control panels, the authors stress their belief in the importance of direct manipulation; their analysis of the cognitive affordances of interface designs is approached with this in mind.
While the focus of their work is an analysis of the cognitive issues, the structural classification is equally interesting to our own work. The node-link style and "other" categories include information visualization tools in which the layout of information is three-dimensional. The raised-surface family of interfaces is applicable to information which is principally linear (in the case of Perspective Wall) or planar in nature. While these are 3D information visualization tools, the third dimension here fulfills a role in attribute emphasis (detail-in-context viewing) rather than as a spatial aspect attributed to the data itself. The information landscape group of interfaces represents visualization systems where the organization of data is principally 2D but the individual entities are abstracted as 3D representations of objects on a landscape.
In the cases of the raised-surface and landscape interfaces, the restricted layout means we can readily find viewpoints which minimize the effects of occlusion. However, the 3D layouts of node-link designs such as SemNet [33] and "other" designs such as the Information Cube [86] imply that occlusion may be a problem for which some solution other than choice of viewpoint is required.
Task-Oriented Frameworks

Other schemes for the classification of information visualization methods have approached the process from the analysis of the tasks performed while interacting with the visualization systems. Card [16] identifies four functional levels in the process of "information perceptualization". These four levels are: the infosphere, the workspace, sense-making tools and the document. Approaching the description of a visualization tool from this perspective allows Card to separate the function of the tool from the technique itself, as a specific technique may be applied across several of these functional levels.
The infosphere in this scheme is the space of all available information sources, such as databases and documents. Tools for the visualization of the infosphere are capable of providing overviews of this space and perhaps incorporating a semantic or spatial organization of the resulting structure. Narcissus [46], Hyperbolic Space [65] and "worlds within worlds" [35] are all presented as examples of infosphere-level visualization tools.
Workspace-level actions involve interactions with groups of objects that have been arranged in such a manner as to make the completion of certain tasks more efficient. The role of visualization tools in this situation is to improve the efficiency of interaction with the structure of the workspace. This can be accomplished through the utilization of faster perceptual or pre-cognitive attributes of objects rather than cognitive properties, and by increasing the information carrying capacity of the display as a whole. The use of zooming interfaces in workspaces such as Pad++ and multi-scale interfaces such as the Bifocal Display [99] to provide detail-in-context presentations are other means of making interaction with the workspace more efficient.
Sense-making-level tools are those which assist in the understanding of information through the creation of combinations and associations. These tools may be static or dynamic in presentation but are intended to reveal patterns in information. For example, the cone tree [92] turns part of the WWW into a tree according to a traversal algorithm, and the Table Lens system [85] presents a detail-in-context display which supports interaction with the rows and columns of a worksheet to reveal patterns in the data.
Finally, document-level tools interact with the elementary units of information retrieved. Within an individual document the contained information itself may be large and may possess interesting internal structure.
Shneiderman [98] formulates a taxonomy incorporating both the data type and the proposed task. Where Card presented four task levels, Shneiderman provides seven more specific task descriptions: overview, zoom, filter, details-on-demand, relate, history, extract. The examination of the relations between these two schemes provides a richer illustration of the process of interacting with an information space.
In the infosphere we wish to have an overview of the entire information space. We zoom and filter this space in order to construct workspaces. Within these workspaces we relate information elements to discover patterns in the process of sense making, and by maintaining a history of actions we may retain previous patterns to aid us in the construction and discovery of newer ones. Finally, we want to be able to extract details and sub-groups to support document-level investigations and abstractions. Shneiderman also presents his information-seeking mantra: "Overview first, zoom and filter, then details-on-demand."
Shneiderman also identifies seven data types in the classification of information visualization systems: one-, two- and three-dimensional, temporal, multi-dimensional, trees and networks. Shneiderman recognizes that many information visualization systems are oriented towards dealing with a specific class of data, for instance: Geographical Information Systems with 2-dimensional data, Lifelines [76] and Perspective Wall [72] with temporal data, Cone and Cam Trees [92] for tree data, and SemNet [33] and related tools for network data. The author believes that a truly successful information visualization system will have to be designed to accommodate several classes of information and the full range of tasks simultaneously.
2.3 Structural Framework for Visualization Design

We propose a simple classification system for 3D visualization systems, whether scientific or information, based principally on the characteristics of the resulting visual representations, rather than on the characteristics of the data. In this respect our classification is somewhat more similar to that outlined by Wiss and Carr [121] than that of Shneiderman [98]. We identify three principal characterizations of visual representations: discrete, contiguous and continuous. These three choices are motivated to some degree by the specific challenges they pose in the application of our occlusion resolution tools, which we will examine in further detail in Chapter 4. We would describe the characteristics of each class of representation as follows.
Figure 2.4: Structural classes of three-dimensional representations: (a) discrete, (b) contiguous, (c) continuous.

Discrete information layouts are exemplified by node-and-edge structures or 3D scatter-plot layouts. Information representations of this class are characterized as having spatial ordering relationships where connections that exist within the structure are represented by explicit connectivity (edges) rather than physical adjacency.
Contiguous information representations include 3D models, finite element sets, CAD data and so on. In these models not only is spatial ordering important but so are the physical properties of adjacency and containment. Distortions which are applied to this class of information representations may benefit from a treatment that accounts for collision detection; layout adjustments which result in components translating through each other would otherwise violate the perception of the components as comprising solid surfaces.
Continuous representations may be truly continuous, as the product of 3D parametric equations producing a volumetric function, or they may be data so finely discretized as to appear continuous, such as volumetric medical imaging, geophysical or fluid-dynamics data. These kinds of data are generally handled with the approaches described in section 2.2.1.

2.4 Methods for Occlusion Reduction

In previous sections we have briefly discussed the approaches that various visualization systems have used to reduce the occlusion of elements within a representation.
In the following discussion we will present these traditional approaches in more detail and discuss their operation independent of specific visualization tools.
2.4.1 Navigation

In the case of relatively sparse layouts of 3D information, movement of the viewpoint is a common solution to the situation of nearer objects occluding more distant ones of potential interest. The ability to move the viewpoint or re-orient a model is a common and now expected feature in any system which employs interactive 3D graphics. Beyond offering users the opportunity to find solutions to instances of occlusion, the ability to produce movement is a powerful means of enhancing the perception of a 3D structure. Figures 2.5(a) and 2.5(b) illustrate two different views of a skeletal model of a human foot. In figure 2.5(a) the first metatarsal bone (which is highlighted) is hidden from the current viewpoint through occlusion by other bones in the model. In figure 2.5(b) the viewpoint has been rotated about the center of the model in order to move to a new position from which the view of the metatarsal is no longer occluded.
Figure 2.5: Moving the viewpoint makes the highlighted bone (the first metatarsal), occluded in (a), visible in (b).
This movement is generally passive movement of the data space, through input with a mouse or similar device. The change of VP may also be accomplished by active movement of the viewer, along with appropriate tracking technology to automatically update the view. In the passive movement approach the metaphor of a turntable is often employed. Movement of the VP is about a central fixed point of reference and the view remains directed at this point. Commonly the movement of the data affords complete revolution about the vertical (y) axis and limited movement of the VP up and down around the horizontal (x) axis. Some means of zooming the view in and out from the point of reference is also common. Generally zooming is accomplished by the movement of the VP towards or away from the center of the data set, rather than by narrowing of the field of view, which is the more common definition of zooming in optical terms. The correct term for the movement of the viewpoint in and out in this manner is dollying. Such movement of the viewpoint into and out of a structure does offer a simple means of moving past occluding objects to produce a clear view of those previously obscured. In figure 2.6(a) the node in the center of the 93-element 3D graph is highlighted but almost entirely occluded by other nodes in the structure. Here movement of the VP to another external viewpoint is unlikely to provide a resolution of this occlusion. In figure 2.6(b) the viewpoint has been moved into the structure, past the occluding elements, to provide a clear view of the highlighted central node. The side effect here is the loss of much of the structure from the display, as it is outside the field of view or now behind the viewpoint.
Figure 2.6: Movement of the viewpoint into the structure puts elements that occluded the node of interest in (a) behind the viewer in (b).
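A sketch of the turntable-and-dolly navigation described above (the parameterization and names are our own illustrative choices, not the implementation of any cited system): the viewpoint is placed by two angles and a distance about a fixed point of reference.

    import numpy as np

    def turntable_viewpoint(center, azimuth, elevation, distance):
        """Return the viewpoint (VP) position for a turntable-style camera.

        center: fixed point of reference the view remains directed at.
        azimuth: rotation about the vertical (y) axis, in radians.
        elevation: limited rotation up/down about the horizontal (x) axis.
        distance: dollying the VP in and out along the line to the center.
        """
        cx, cy, cz = center
        x = cx + distance * np.cos(elevation) * np.sin(azimuth)
        y = cy + distance * np.sin(elevation)
        z = cz + distance * np.cos(elevation) * np.cos(azimuth)
        return np.array([x, y, z])

    # Dollying in past occluding elements: reduce the distance while keeping
    # the same azimuth and elevation.
    vp_far  = turntable_viewpoint((0, 0, 0), np.pi / 4, 0.3, 10.0)
    vp_near = turntable_viewpoint((0, 0, 0), np.pi / 4, 0.3, 2.0)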
2.4.2 Partial Transparency

A phenomenon closely related to occlusion is the partial occlusion produced by semi-transparent, or "silk", surfaces. These surfaces do not completely obscure more distant objects and have been widely used in computer graphics and visualization to provide a sense of solid 3D structure with minimal loss of information through occlusion. Cone Trees [92] and the Spiral Calendar [73] each applied partial occlusion to improve the appearance of the spatial structure of the visual representation. Partially transparent cutting planes have been applied in surgical visualization tasks to facilitate the orientation of the plane with the organs intersected [124]. Such see-through surfaces are also seen in 2D information spaces as a means of creating a 2.5D space, for example in Tool Glasses [9] and Silk Cursors [122, 123]. The work of Zhai et al. on 3D silk cursors [124] indicates the effectiveness of partially transparent cursors in six-degree-of-freedom (6DOF) tracking tasks. Silk cursors were found to be more effective than wire-frame cursors in 3D localization tasks in both mono and stereo viewing.
Figure 2.7: Partial Transparency

In task domains with increasingly complex surface topologies, partial transparency increases the difficulty of perceiving the shape of the surface and makes the distinction of multiply layered surfaces more difficult. For example, in figure 2.7 partial transparency makes the interpretation of the details of the distinct, layered surfaces more challenging. The work of Interrante et al. in [50, 53] examines the application of contour-driven textures to improve the comprehension of such structures, at the expense of increasing opacity.
2.4.3 Culling

Cutting planes and volumes have long been a standard feature of direct volume rendering systems. Cutting planes are a highly effective means of providing visibility of voxels adjacent to a plane through a volumetric data set, or at a specific location within a less dense representation of discrete components or surfaces. Cutting volumes are used to remove more complex regions from a display, rather than entire half spaces. These volumes define regions that are removed in a manner similar to constructive solid geometry (CSG) subtract operations. Figures 2.8(a) and 2.8(b) illustrate the effect of a cutting plane and a cutting volume on a volumetric data set.
Cutting planes and volumes have the effect of remedying the occlusion of areas or regions of interest at the expense of removing other information from the final presentation.
There are additional costs in practice in terms of the complexity involved in specifying the placement and orientation of cutting planes. Many methods have been examined to address this problem of cutting plane placement, including two-handed interaction with props to simplify these tasks [48].
Figure 2.8: Cutting planes and regions remove volumetric data in a half space (a) or sub-volume (b) from the final image and make previously occluded surfaces visible adjacent to the cut.
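A minimal sketch of the half-space removal performed by a cutting plane (the point data and plane parameters below are invented for illustration):

    import numpy as np

    def apply_cutting_plane(points, plane_point, plane_normal):
        """Cull everything on the far side of a cutting plane.

        Returns a boolean mask selecting the points (or voxel centers) kept:
        those with a non-negative signed distance to the plane.
        """
        n = np.asarray(plane_normal, dtype=float)
        n /= np.linalg.norm(n)
        signed_dist = (np.asarray(points, dtype=float) - plane_point) @ n
        return signed_dist >= 0.0

    # Example: keep only data in the half space above the z = 0 plane.
    pts = np.random.rand(1000, 3) * 2.0 - 1.0
    mask = apply_cutting_plane(pts, plane_point=(0, 0, 0), plane_normal=(0, 0, 1))
    visible = pts[mask]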
Segment removal from direct volume rendering is another method for the reduction of occlusion. This is analogous, in 3D information visualization representations, to the application of a degree-of-interest (DOI) function to filter elements out of the final presentation.

Figure 2.9: Selective removal of component groups improves visibility of remaining components.
In volume visualization the application of transfer functions determines the color and transparency of a specific component such as skin, bone, muscle, or other internal organs. Individual component alpha values may be adjusted (lowered) so as to remove component elements and reduce the effect of occlusion on those remaining. In figure 2.9 we have reduced the opacity of the outermost layer in the structure to zero in order to achieve a clearer view of the remaining two internal components.
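As a sketch of this kind of classification-based transfer function (the value ranges, colors and opacities below are invented for illustration and do not come from the thesis):

    def transfer_function(value, skin_alpha=0.0, muscle_alpha=0.4, bone_alpha=1.0):
        """Map a scalar voxel value to (color, opacity) by simple range classification.

        Lowering a component's alpha (e.g. skin_alpha=0.0) removes that component
        from the rendering and reduces occlusion of the components behind it.
        """
        if value < 0.3:            # classified as skin / soft outer tissue
            return (1.0, 0.8, 0.7), skin_alpha
        elif value < 0.6:          # classified as muscle
            return (0.8, 0.2, 0.2), muscle_alpha
        else:                      # classified as bone
            return (1.0, 1.0, 0.9), bone_alpha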
2.5 3D Deformation Methods

Methods for the deformation of 3D models are intrinsic tools in the production of interactive computer graphics. Deformation of models may be in the context of a simulation of a physical process, in producing animation for film or television, or in interactive graphics for computer games. Many techniques have been developed and applied across a range of 2D and 3D graphical structures. Overviews of this field and the details of many of these techniques can be found in most computer graphics textbooks, notably [36, 41, 116]. There is a comparatively small set of methods for the transformation of graphical objects which are of specific significance in their relationship to the techniques which we will develop through this work. The most significant systems are the Discontinuous Ray Deflectors of Kurzion and Yagel [62], the Zoom Illustrator work of Preim et al. [82, 83, 87] and the Page Avoidance component of the Data Mountain system developed at Microsoft by Robertson et al. [88].
2.5.1 Space Deformation Operators

Space deformation operators [61, 62, 63] provide a mechanism for warping 3D volumetric data or models, and, with the addition of discontinuous deflectors [62], these operators can be arranged in such a manner as to provide visual access to partial cut-planes through volumetric data sets. The operation of ray deflectors is first described in [61]. A ray deflector causes a locally constrained deviation in the path of a sampling ray and results in the apparent counter-displacement of the sampled surfaces (figure 2.10).
Figure 2.10: The effect of a warp operator on the path of a ray through a scene. The deflected ray results in the appearance of a deformation of the surface.
(After [61])
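A rough sketch of the ray-deflector idea, a locally bounded deviation added to sample positions along a ray; the falloff function and parameter names here are our own illustrative assumptions, not the formulation given in [61].

    import numpy as np

    def deflect_sample(point, deflector_center, radius, strength, direction):
        """Deviate a ray sample point within a spherical region of influence.

        Samples inside `radius` of the deflector are pushed along `direction`,
        with the deviation falling off towards the boundary, so the deflection
        is locally constrained and the rest of the ray is unaffected. Sampling
        the volume at the deviated position produces the apparent
        counter-displacement of the sampled surfaces.
        """
        p = np.asarray(point, dtype=float)
        offset = p - deflector_center
        d = np.linalg.norm(offset)
        if d >= radius:
            return p                                  # outside the region: no deviation
        falloff = (1.0 - d / radius) ** 2             # smooth decay to zero at the boundary
        return p + strength * falloff * np.asarray(direction, dtype=float)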
The result is that the outer surface of a volume is split and the data on that plane is made visible (figure 2.11) as the surrounding material is pushed aside.
The application of ray deflector operators to volume rendering with hardware assisted volume rendering is examined in (63J. The process of applying hardware texture mapping accelerators to the process of volume rendering is described in (1J.

Figure 2.11: Discontinuous ray deflectors operate by deflecting rays in opposite directions from opposite sides of a plane. Ray sampling is restricted to the original side of the plane, thereby producing a cutting and retracting form of distortion.
(After [63])

Kurzion and Yagel apply the inverse of the ray-deflection method to deform points in tessellated planes, thereby performing a piecewise linear approximation of the ray-deflection operation. The application of discontinuous deflectors in this context leads to the problem of splitting the tessellated planes according to sampled and unsampled vertices.
2.5.2 Zoom Illustrator

Zoom Illustrator [82, 83] extends the continuous zoom algorithm [4, 32] from two dimensions in order to apply it to interaction with three-dimensional models of anatomical structures. The effect of the zoom algorithm is to emphasize the appearance of objects of interest within the model by applying magnification to these elements. In order to accommodate this magnification within the original space of the 3D layout, the surrounding elements of the model are reduced in scale accordingly to provide the necessary space. The interaction with such anatomical models as 3D puzzles is explored in the work of Ritter et al. [87], in which a variety of 3D interaction and presentation strategies are explored. These techniques include the zoom algorithm as well as the use of partial transparency and shadows to enhance the perception of the spatial relationships of the 3D elements.
2.5.3 Page Avoidance

Data Mountain, developed by Robertson et al. [88], is a 3D document management system. The system was initially applied to the spatial arrangement of, and interaction with, Favorites or bookmarked pages from a web browser, and has since been incorporated into the tool palette of the Task Gallery system [90]. The Data Mountain allows for the movement of these pages, visually represented as textured images of the actual web page on rectangular polygons which remain perpendicular to the view direction.
An important part of the Data Mountain environment is the incorporation of a "page avoidance" behavior exhibited by the individual page elements. Each page maintains a minimum distance from all other pages in order to prevent one page from completely occluding another. The movement of one page by the user results in other pages moving out of its way. The movement of pages is propagated to similarly avoid occlusion of other pages. The Data Mountain environment in two incarnations (DM1 and DM2) was compared in a user study with Microsoft Internet Explorer 4.0 (IE4). The differences between DM1 and DM2 include the addition in DM2 of page avoidance, stronger association of "hover titles" with pages, and the addition of spatialization effects to the accompanying audio feedback. The first significant result of this study for our work is that the DM2 users showed reliably as fast or faster retrieval times than the IE4 or DM1 users, with fewer incorrect retrievals. The second significant aspect is that in a subjective survey users expressed a preference for the DM2 system over IE4, while they did not prefer DM1 over IE4.
The page avoidance algorithm of Data Mountain proves to have a strong resemblance to the occlusion avoidance algorithm we will develop in this thesis when applied to similar discrete representations. We will see a more detailed discussion of this application in Section 4.1.4.

2.6 Detail-in-Context Viewing

Detail-in-context viewing for 2D information presentation has a history going back as far as 1981 with a Bell Labs technical report by Furnas [39] which introduced the notion of a "fish-eye" transform of a document and an associated degree of interest function. Subsequent work, notably by Spence and Apperly [99], extends the application of these "fish-eye" views to graph layout and, later, to general raster images by Carpendale et al. [19, 20].
2.6.1 Perspective-Based Fisheyes for 2D Layouts

A 3D metaphor for the creation of detail-in-context views of 2D information representations was first demonstrated in the Perspective Wall system developed at Xerox PARC by Card et al. The Perspective Wall and derivative systems generate magnification and compression of layouts by manipulating a 2D surface in three dimensions. Pulling a region of the surface up towards the viewpoint, in conjunction with the effect of perspective distortion, makes that part of the surface appear larger. Perspective Wall is designed to work with data representations that are 2-dimensional but have a dominant horizontal dimension, such as timelines, digital video, and visual representations of digital audio.
The data is arranged on the wall and a detail-in-context view is obtained by pulling a section of this wall towards the viewpoint in a 3D perspective viewing projection (as observed in [33], an orthographic projection would not produce the same effect). The section of the wall which is pulled towards the viewer remains perpendicular to the line of sight. Were it not to do so, apparent magnification across its surface would not remain constant, as the magnified area would cover a range of z (depth) values. The remaining segments of the original wall remain attached at the left and right sides of the magnified region and appear to recede away from the viewer to the original depth of the wall. These bracketing regions are therefore at an oblique angle to the line of sight, and the apparent magnification at any point on the wall in the contextual region depends directly on the depth in z of that point and inversely on the angle between its normal and the line of sight to that point. Figure 2.12(a) illustrates the perspective wall in a neutral state with no focal region; figure 2.12(b) shows the relative configuration of the perspective wall surface, the viewpoint and the perspective view volume. In figure 2.13 a region in the center of the perspective wall is selected as a region of interest and pulled up towards the viewpoint in order to produce a detail-in-context view.

Figure 2.12: The base configuration of the Perspective Wall with no regions of interest (a). The surface has one dominant linear dimension and is at a constant depth in z in the perspective viewing frustum (b).
The Perspective Wall was soon followed by a related system from Xerox PARC, the Document Lens [91]. In the Document Lens the data space is more traditionally 2D, and does not assume a principal, dominant dimension in which the data is mainly linear. The Document Lens is so named because the initial application of the system was for browsing a document corpus, and the focal region is generally defined as a single page in the set. In this system there are contextual regions to the top and bottom of the focal region as well as to its left and right. The distorted surface appears to take on the shape of a truncated pyramid. The document of interest forms the top of the pyramid and the surrounding, contextual documents form the sides of the pyramid.
The distortion is constrained to the extents of the original (flat or undistorted) layout, hence one or more sides of the pyramid may take on a more distorted appearance than the others as the focal area is moved towards or away from a particular boundary.

Figure 2.13: Perspective Wall with a single ROI specified in the middle of the field of view, generating a region of increased scale and surrounding distorted regions (a). The ROI is at a constant depth in z with respect to the viewpoint in the perspective viewing frustum (b).
Both Perspective Wall and Document Lens engage human perceptual abilities in interpreting the 3D metaphor of perspective distortion. The goal is to support comprehension of the effect of the distortion applied in order to obtain the detail-in-context view. The use of the perspective view-volume introduces some particular constraints on the manner in which the surface may be distorted. Figure 2.14 illustrates the features of the perspective viewing frustum. The perspective view volume in computer graphics forms a pyramid. The boundary of the pyramid is defined at its vertex by the viewpoint and at its base by the far plane. The pyramid may have a square base, but more generally it has a rectangular base, the relative dimensions of which are defined by the width and height of the viewing window; the ratio of the width to the height defines the aspect ratio. In the traditional computer graphics pipeline the perspective projection transformation is followed by a viewport-to-screen transformation which may change the relative width and height of the image, hence the aspect ratio of the final rendered image may not be identical to that of the perspective view-volume. In fact, if a canonical perspective view volume is defined to aid in the process of 3D clipping, then it will have a square base (aspect ratio of 1) and the viewport transformation will restore the appropriate aspect ratio.
Figure 2.14: The features of the perspective viewing frustum. The frustum forms a pyramid with the viewpoint at the apex. The far plane forms the base of the pyramid. The width and height of the pyramid are usually defined by the field of view and the aspect ratio. The field of view is the angular horizontal width of the pyramid, and the aspect ratio defines the relationship between the width and the height (a = width/height). Objects within the pyramid are visible in perspective projection if they are located between the near and far planes in depth. The central axis of the pyramid defines the direction of the view in world coordinates.
The geometry of the pyramid imposes some restrictions on where a surface element may be moved to and remain visible in the final image. Notably, an element lying on the line of sight (a line through the center of the pyramid) will remain in the center of the field of view as it moves along this line of sight, perpendicular to the base of the pyramid, towards or away from the viewpoint. Conversely, an element that lies to one side of this line of sight, also moving perpendicular to the base (translation in z only, with no change of x or y coordinates), will appear to move away from the center of the field of view as it moves towards the viewpoint. Thus a focal region in Perspective Wall or Document Lens that lies in the center of the field of view requires no special consideration. Should the region be centered anywhere else, then moving it in z to produce magnification will have the undesirable effect of an induced translation which will move the region out of the field of view. The solution to this problem, both in Perspective Wall and the Document Lens, was to shear the view volume such that the viewpoint was moved directly over the center of the focal region, as illustrated in figure 2.15. Regions may now be translated simply in z and remain within the field of view. This choice of viewpoint movement does introduce a further restriction on the range of views which may be constructed. As the perspective viewing volume has only one viewpoint, the requirement that it be positioned directly over any focal region restricts the system to a single focus.
Figure 2.15: Geometry of the frustum is sheared in x to keep the viewpoint directly over the off-center ROI: (a) off-center ROI, (b) translation of VP in x.
Multi-focal distortion viewing via the effect of perspective distortion was introduced in [19] with Three-Dimensional Pliable Surfaces (3DPS), which later became known as an example of an Elastic Presentation Space (EPS) [23]. 3DPS implements a system by which regions of interest are translated not in z only, perpendicular to the base of the perspective viewing volume, but along sight-line-aligned vectors. The use of sight-line-aligned distortions provides for the specification of multiple focal regions within an information presentation space, and the process of blending affords a means of controlling the interaction of these multiple regions.
Figure 2.16: Effect of simple vertical movement of a portion of the surface in an off-center lens after perspective projection (a). The reason is that the surface now extends outside of the perspective viewing frustum (b).
Figure 2.16(a) is an example of a 3DPS surface with a focal region at the right edge of the plane, with the region pulled up perpendicular to the presentation plane.
Figure 2.16(b) is a side view of this situation, illustrating the way the focal region moves outside of the viewing frustum. The solution in 3DPS was to shear the focal region back towards the viewpoint rather than shearing the viewing frustum itself, as we see in figure 2.17. This, along with a method for mediating the interaction of focal regions through blending, allows for the simultaneous specification of multiple focal regions, as in figure 2.18. The construction and application of these distortion views are described in detail in [19, 22] but will be revisited in Section 2.6.1.

Figure 2.17: Shearing the distorted region so that it is oriented towards the viewpoint (a) brings the entire extent of the lens back into the projected image (b).
Mathematical Framework for Layout Adjustment

We begin from the observation that the integration of the region of interest, the surrounding contextual region of compression and the region of the presentation space which remains undistorted may be determined by the specification of the profile of a cross-sectional curve. This curve is used to join the region of interest, which has been pulled towards the viewpoint and is at some height h with respect to the original layout, with the plane of the original layout. The characteristics of this curve will determine the distribution of compression within the contextual regions and the nature of the connection of this region to the focal and original layout areas.
Perspective Wall and Document Lens employed a linear segment to connect the focal region to the original plane of the image. (Perspective Wall and Document Lens both distributed distortion from the edges of the focal region to the extents of the presentation space, subsequently leaving no undistorted regions.) The mathematics governing the appearance of these distortions is examined in [67].
While the context of that analysis is primarily 2D → 2D transformations, the use of 2D → 3D → 2D transformations is simply another framework for the conceptualization of the mathematical operations. The effect of the operations in the final 2D projection is the same.

Figure 2.18: Shearing the lenses rather than the viewing frustum allows for the specification of multiple ROI (a). The area of intersection of lenses must be blended to provide a smooth transition between the two shearing directions (b).
f(x) = e^{-10.0 x^2}                                        (2.1)

t(x) = \frac{x \cdot f(x)}{d - f(x)}                        (2.2)

t(x) = \frac{x \cdot f(x)}{2 - f(x)}                        (2.3)

t(x) = \frac{x e^{-10.0 x^2}}{2 - e^{-10.0 x^2}}            (2.4)

If we begin with the Gaussian curve as the cross-sectional profile f(x), as in equation 2.1 and figure 2.19, the curve has a maximum height of 1. If we define a simple perspective view-volume with a viewpoint at a distance d = 2 from the original plane of the presentation surface, then we can determine the resulting translation t(x) of a point on this curve after perspective projection, as in equation 2.2, expanded in equation 2.4 and plotted in figure 2.20.

Figure 2.19: The Gaussian curve f(x) = e^{-10.0 x^2} used in Three-Dimensional Pliable Surfaces to provide smooth integration of the ROIs and the original information layout.

Figure 2.20: After perspective projection, the apparent transformation t(x) of points on a surface transformed by the application of a Gaussian lens with a maximum height of 1 and a viewpoint distance of 2 from the original surface plane.

Figure 2.21: The displaced position of points d(x) = x + \frac{x e^{-10.0 x^2}}{2 - e^{-10.0 x^2}} as a result of the Gaussian lens after perspective projection.

d(x) = x + t(x)                                             (2.5)

d(x) = x + \frac{x e^{-10.0 x^2}}{2 - e^{-10.0 x^2}}        (2.6)

m(x) = \frac{d}{dx} d(x)                                    (2.7)

m(x) = 1 + \frac{e^{-10.0 x^2}}{2 - e^{-10.0 x^2}} - \frac{20.0 x^2 e^{-10.0 x^2}}{2 - e^{-10.0 x^2}} - \frac{20.0 x^2 e^{-20.0 x^2}}{(2 - e^{-10.0 x^2})^2}        (2.8)

This produces a displaced layout d(x), as in equations 2.5 and 2.6 and plotted in figure 2.21. If we removed the effect of the transformation we would have d(x) = x. Finally, we determine the relative magnification or compression, m(x), of a region on this line as the derivative of d(x) with respect to x, as in equations 2.7 and 2.8 and figure 2.22.
Table 2.1 illustrates the relation between the shape of the 3D displacement curve profile, the resulting apparent displacement of points in perspective projection, and the resulting magnification distribution.
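As a numerical check of equations 2.1 to 2.8 (our own sketch; the sampling grid is arbitrary), the translation, displaced position and magnification of the Gaussian lens can be evaluated directly:

    import numpy as np

    def gaussian_lens(x, d_vp=2.0, k=10.0):
        """Apparent translation, displaced position and magnification for a
        Gaussian lens of unit height viewed from distance d_vp (equations 2.1-2.8)."""
        f = np.exp(-k * x**2)                   # cross-sectional profile, eq. 2.1
        t = x * f / (d_vp - f)                  # apparent translation, eq. 2.2 / 2.4
        disp = x + t                            # displaced position, eq. 2.5 / 2.6
        m = np.gradient(disp, x)                # magnification d(disp)/dx, eq. 2.7 (numerical)
        return t, disp, m

    x = np.linspace(-1.0, 1.0, 401)
    t, disp, m = gaussian_lens(x)
    print(m[200])    # approximately 2.0 at the focal center (x = 0)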

Figure 2.22: The magnification (with single-point perspective projection) and compression distribution as a result of the Gaussian lens, m(x) = \frac{d}{dx} d(x).

Figure 2.23: The progression of L_p distance metrics from L_1 (figure 2.23(a)) to L_200 (figure 2.23(d)) in two dimensions.
d_{L_p}(x, y) = \left( |x|^p + |y|^p \right)^{1/p}          (2.9)

Another aspect of the specification of a transformation function in a system such as 3DPS or EPS is the measurement of distance from a point to the nearest focal points. Carpendale describes the use of a distance metric other than simple Euclidean distance to affect the shape of focal regions in [23]. The L_p distance metric provides a means of varying the appearance of a focal region, varying the shape of its boundary from diamond-like to rectangular by adjusting the value of the p parameter in equation 2.9. Figure 2.23 illustrates the effect of varying the value of p from a minimum of 1 in figure 2.23(a) to a maximum of 200 in figure 2.23(d).

Table 2.1: A visual comparison of a range of magnification functions (Constant f(x) = 1, Linear f(x) = 1 - x, Gaussian f(x) = e^{-10.0 x^2}, Hemisphere f(x) = sin(cos^{-1}(x)), Cosine f(x) = cos(x \pi / 2), Tangent f(x) = 1 - tan(0.95 x \pi / 2) / tan(0.95 \pi / 2) and Inverse Hemisphere f(x) = 1 - sin(cos^{-1}(1 - x))) and their properties of slope df(x)/dx, apparent planar translation t(x) and resulting magnification m(x) (within a perspective-distortion system such as 3DPS).
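A small sketch of the L_p metric of equation 2.9 (the grid and threshold below are arbitrary illustrations):

    import numpy as np

    def lp_distance(x, y, p):
        """L_p distance from the origin; p = 1 gives a diamond-shaped focal
        boundary, p = 2 a circle, and large p approaches a square (cf. figure 2.23)."""
        return (np.abs(x)**p + np.abs(y)**p) ** (1.0 / p)

    # The boundary of a focal region is the set of points at a fixed L_p distance.
    xs, ys = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
    for p in (1, 2, 200):
        inside = lp_distance(xs, ys, p) <= 1.0
        print(p, int(inside.sum()))   # more grid points fall inside as p grows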

CHAPTER 2. RELATED WORK 43 Magnification versus Displacement In (18~ the relative roles of magnification and displacement in 2D detail-in-context viewing are explored. We observe that the movement of elements along the line of sight for a given focal region produces a magnification effect due to perspective distortion.
We further note that in the case of a discrete 2D graph the effect is that locally dense layouts of edges become less dense at the expense of some compression at some other location. Since it is possible to separate the action of displacement from the magnification of nodes in such layouts, local layout adjustments may be applied to the problem of "cluster-busting" in dense regions of graphs. This capability holds true in 2D and 3D information layouts and has some utility in reducing the problem of occlusion in locally dense regions, as illustrated by Keahey with layout adjustment of 3D structures in [59].

Chapter 3

Method

We have identified in previous chapters that there has been a great deal of work in the creation of 3D visual representations in visualization. At the same time there has developed a strong interest in the detail-in-context presentation of information in both 2D and 3D representations. Tools for generating such presentations of data have been the principal focus of the work in this area. The suite of tools available for the generation of detail-in-context views of 3D representations of information is much smaller than that which operates on 2D data.
One of the principal differences that we face in dealing with 3D representations is the issue of occlusion. When dealing with 2D displays of 2D representations we do not have to concern ourselves with elements of the information layout becoming hidden behind other elements. Adding a z component to the layout space introduces the possibility that some elements will become hidden. Extending classical detail-in-context viewing algorithms to 3D through the addition of a z component does not adequately address this situation.
We can, however, extend the application of some of these 2D techniques to 3D in a manner that does account for the presence of elements occluding the object of interest. We will accomplish this by examining the process of generating 2D detail-in-context views and identifying the specific elements of the transformation process which contribute to the reduction in local information density in the final layout.

3.1 Layout Adjustment in 2D

Our method for occlusion reduction and detail-in-context viewing of 3D representations has grown out of our previous work on the Three-Dimensional Pliable Surface system (3DPS). 3DPS created detail-in-context views of 2D visual representations with sight-line-aligned distortions of a 2D information presentation surface within a 3D perspective viewing frustum.
In 3DPS, magnification of regions of interest and the accompanying compression of the contextual region to accommodate this change in scale are produced by the movement of regions of the surface towards the viewpoint (VP). The process of projecting these transformed layouts via a perspective projection results in a new 2D layout which includes the zoomed and compressed regions.
The use of the third dimension and perspective distortion to provide magnification in 3DPS provides a meaningful metaphor for the process of distorting the information presentation surface. The 3D manipulation of the information presentation surface in such a system is an intermediate step in the process of creating a new 2D layout of the information. In section 2.6.1 we saw that a transformation function from 2D → 2D is possible if we incorporate the effect of the perspective projection on the layout adjustment function.
If we concentrate on the 2D → 2D translation function t(x), we can apply it to reduce the local density of elements in a layout, as demonstrated in [18]. This effect of local density reduction is significant, as is the ability to separate the translation component of the lens from the magnification function when dealing with discrete structures. It is precisely this ability to reduce, or rather redistribute, the density of information in a representation that we apply to the problem of occlusion in 3D representations.
3.2 Occlusion and the Sight-line

In order for an object of interest in a 3D information representation to become occluded, a second object must be positioned such that its projection overlaps that of the first as seen from a specific location, the viewpoint. Furthermore, this occluding object must be located between the viewpoint and the object of interest.
These simple facts provide us with an insight into how we might seek to develop a solution by which we prevent the occlusion of objects of interest.
We will define the sight-line of a given object as the line segment connecting the center of that object to the viewpoint. It is in the neighborhood of this sight-line that other objects, which may occlude our object of interest, will lie. As new objects of interest are defined or the viewpoint is moved to provide a new presentation of the information layout, the location of this sight-line within the layout changes, and the set of objects representing possible sources of occlusion changes as well.
The fact that it is only in this region, on or near the line of sight, that we will find potential occluding objects is significant; if there are no objects in this neighborhood, other than the object of interest, then we will have no occlusion. What we are looking for, then, is a method which will keep the region surrounding the sight-line clear of other occluding objects.
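As a sketch of this idea (our own illustration, not the operators developed later in the thesis), the potential occluders of an object of interest are those objects whose centers lie near the sight-line segment, between the viewpoint and the object:

    import numpy as np

    def potential_occluders(viewpoint, interest, objects, radius):
        """Return indices of objects whose centers lie within `radius` of the
        sight-line, the segment from the viewpoint to the object of interest."""
        vp = np.asarray(viewpoint, dtype=float)
        oi = np.asarray(interest, dtype=float)
        seg = oi - vp
        seg_len2 = seg @ seg
        occluders = []
        for i, obj in enumerate(objects):
            p = np.asarray(obj, dtype=float)
            t = ((p - vp) @ seg) / seg_len2        # parametric position along the sight-line
            if 0.0 < t < 1.0:                      # only objects between viewpoint and interest
                closest = vp + t * seg
                if np.linalg.norm(p - closest) < radius:
                    occluders.append(i)
        return occluders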
Cutting planes, positioned and oriented appropriately, could remove all of the data in a representation between the object of interest and the viewpoint. This would have the desired effect of keeping the region of the sight-line clear. However, it would not support detail-in-context viewing of 3D representations.
Transparency too could be used to reduce the effect of occlusion on our ability to see the object of interest, at the expense of increased difficulty in the comprehension of the structure as a whole. There are additional costs in rendering transparent objects correctly with a graphics API such as OpenGL, as the use of the z-buffer for visible surface determination is no longer sufficient. Transparency may also be increased to the point where the potentially occluding objects are essentially removed from the scene. This would accomplish the same effect as filtering.
Navigation of the viewpoint to a new location will define a new sight-line to the object of interest and change the set of potentially occluding objects. It may be possible to find a new external viewpoint where there are fewer or no occluding objects between the new viewpoint and the object of interest. In denser information representations (i.e. volumetric data) or representations where the distribution of elements leads to regions of higher and lower densities (such as scatter-plots or graphs) it may not be possible to find such a new viewpoint. Another solution in this case is to fly into the structure, moving the viewpoint past potentially occluding objects. This has the effect of shortening the sight-line and again reducing the potential set of occluders. A side effect of this approach is that at least some of the data in the representation will now be outside of the viewing volume and thus culled from the presentation.
What we seek to do is leave as much as possible of the original structure of the representation intact. We develop a solution that constrains our actions to the neighborhood of the sight-line and acts principally on those objects which represent the most likely potential occluders of an object of interest.
3.2.1 Towards a Solution

In order to construct our solution to reducing 3D occlusion we will begin with the 2D → 2D translation function t() seen in equation 2.2. This function can be applied to the re-distribution of density around a focal point in a 2D information representation, as in [18].
If we extend the source of this function from a point in a 2D representation to a point in a 3D representation, we can extend the operation of the translation function from movement of elements in (x, y) to movement in (x, y, z). This simple extension is capable of producing the local density reductions observed in [27] and has seen some application to cluster-busting of 3D graph or node layouts [59], but yields little benefit in more general visual representations where occlusion is a significant problem.
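As an illustration of this extension, the following sketch displaces points in (x, y, z) radially away from a focal point rather than in (x, y) alone. It is only a minimal sketch: the Gaussian drop-off stands in for the translation function t() of equation 2.2, and the function name and parameters are illustrative rather than those of any particular implementation.

```python
import numpy as np

def displace_from_point(p, focus, strength=1.0, sigma=1.0):
    """Push a 3D point radially away from a focal point.

    The displacement magnitude falls off with distance according to a
    Gaussian basis, standing in for the 2D translation function t()
    extended to three dimensions.
    """
    offset = p - focus
    dist = np.linalg.norm(offset)
    if dist == 0.0:                      # the focal point itself does not move
        return p.copy()
    direction = offset / dist            # unit vector away from the focus
    magnitude = strength * np.exp(-(dist / sigma) ** 2)
    return p + direction * magnitude

# A point near the focus is pushed further than a distant one.
focus = np.array([0.0, 0.0, 0.0])
print(displace_from_point(np.array([0.5, 0.0, 0.0]), focus))
print(displace_from_point(np.array([3.0, 0.0, 0.0]), focus))
```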
Table 3.1 illustrates the application of four well-known 2D detail-in-context viewing transformations and their extension to 3D through the addition of a z component to the algorithms. The first column of the table employs an orthogonal stretch algorithm similar to that of the Bifocal Display of Spence and Apperley [99]. The second column illustrates the effect of a nonlinear orthogonal stretching algorithm similar to that found in Catgraph [JJ], Multi-Viewpoint Perspective [77] and the hyperbola of Hyperbolic Space [65]. The third column is distinct from the first two due to its radial application of the layout adjustment and magnification function.

Table 3.1: Illustration of the application of four common 2D detail-in-context layout adjustment approaches to 3D layout via simple inclusion of the third dimension. In the first row are examples of step and non-linear orthogonal stretching, non-linear radial displacement and non-space-filling orthogonal stretching. Row two illustrates the effect of moving from (x, y) to (x, y, z) for data and displacement function. The third row shows the effect of the layout function without the accompanying magnification of nodes. Row four shows the displacement-only effect extended into three dimensions.
This function is similar to that employed in 3DPS [19] and Nonlinear Magnification Fields [57, 58]. The final column displays a step orthogonal algorithm similar to that in the Zoom family of interfaces [4] and the more recent Shrimp Views [101]. Excellent surveys and examinations of the field of 2D detail-in-context views can be found in the works of Noik [80] and Carpendale [23].
The first row of the table shows the 2D magnification and translation functions applied in conjunction. In the second row these same functions are applied in 3D.

Note especially that the 2D layout adjustment schemes which minimize white-space in the resulting layout maximize occlusion in the 3D case. Row three removes the magnification component from the algorithms and applies only the layout adjustment component. This approach simply translates the points in the graph in 2D without adjusting their individual scale. The bottom row demonstrates that the layout transformations which produce clear paths across the data yield the clearest visual access to the central focal point. This improved visual access is still only available from a limited set of viewpoints, and the object of interest will still be occluded from many possible locations for the viewer.
3.2.2 Redefining the Focus

The principal problem in such direct extensions of 2D detail-in-context transformations to 3D is that they do little to resolve occlusion of the object of interest. As we have noted, in order to reduce occlusion we need to remove objects from the neighborhood of the sight-line. In the interest of maintaining a detail-in-context presentation of the visual representation we seek to accomplish this without the removal of information and with as little disruption of the overall structure of the layout as possible.
This constrained adjustment preserves, as far as possible, the user's original mental model of the 3D structure.
In Section 3.2.1 we treated the object of interest itself as the source for the 3D extensions of our traditional 2D layout adjustment algorithms. If, instead, we define the sight-line from the viewpoint to the object of interest as the source of the transformation function, then we can use a similar method to move objects away from the line of sight, rather than just away from the object of interest.
Figure 3.1 illustrates a 2D cross-section of this mechanism in operation. Figure 3.1(a) shows the original configuration of the information layout, with the object of interest (OOI) near the middle of the layout and the viewpoint (VP) at the lower right. The sight-line connects the OOI to the VP. Figure 3.1(b) shows the displacement vectors generated by the transformation function for the points lying on or near the line of sight. The distance of each point is measured to the nearest point on the line of sight. In determining these distances we also determine a direction vector, from the nearest point on the line of sight to the point being adjusted. Points are moved in the direction of these vectors. The length of the direction vectors forms the input to the transformation function. The result of this function is used to determine the degree of displacement for a point. Points closest to the line of sight are moved the furthest, and points originally lying further away are moved in successively smaller increments. Eventually a smooth transition is made to points which were far enough away as to be unaffected by the transformation. Figure 3.1(c) is the final configuration resulting from the application of the transformation function to the layout.

Figure 3.1: Operation of a linear ORT in cross-section. The focal point and viewpoint define the line of sight through the structure (a). The distance of other elements to the line of sight determines the direction of displacement (b); the length of the vectors in (b) forms the input to the function which determines the magnitude of the resulting displacement vectors. The final transformed layout produces a clear line of sight from the viewpoint to the focal point (c).
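A minimal sketch of this displacement scheme follows, in Python with NumPy. The nearest point on the sight-line segment, the direction vector away from it, and the use of that distance as the input to the transformation function are as described above; the Gaussian fall-off and the parameter names are assumptions for illustration, not the exact function used here.

```python
import numpy as np

def ort_displace(p, ooi, vp, strength=1.0, sigma=1.0):
    """Displace point p away from the sight-line segment joining OOI and VP."""
    seg = vp - ooi
    # Parameter of the nearest point on the segment, clamped to the segment.
    t = np.clip(np.dot(p - ooi, seg) / np.dot(seg, seg), 0.0, 1.0)
    nearest = ooi + t * seg
    offset = p - nearest
    dist = np.linalg.norm(offset)
    if dist == 0.0:
        return p.copy()          # points exactly on the line need a tie-breaking direction
    direction = offset / dist
    # Points nearest the sight-line are moved furthest; distant points are barely affected.
    magnitude = strength * np.exp(-(dist / sigma) ** 2)
    return p + direction * magnitude

ooi = np.array([0.0, 0.0, 0.0])
vp = np.array([0.0, 0.0, 10.0])
print(ort_displace(np.array([0.2, 0.0, 5.0]), ooi, vp))   # near the line: large push
print(ort_displace(np.array([4.0, 0.0, 5.0]), ooi, vp))   # far from the line: nearly unchanged
```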
We will subsequently refer to operators such as this as Occlusion Reducing Transformations of the visual representation, or ORTs. The effect of an ORT is to provide a clear line of sight, or visual access, to an object or region of interest within a 3D visual representation by adjusting the layout. Multiple ORTs may be composed on a representation by combining the effects of the individual ORTs on the elements of the representation.

We will refer to the sight-line of an ORT as the source of the function. Other definitions of the source are possible, as we will see in Section 3.3. The source of the ORT is the location which elements of the representation will move away from as the ORT is applied. If we have a series of ORT operators ORT_0..ORT_n, then a weighted average of the effect of each ORT_i can be employed, where the influence of another ORT_j (j ≠ i) on a point decreases as the distance of the point to the source of ORT_i decreases. This means that for points where this distance is 0, the influence of the other ORT_j is also 0. Since the OOI for ORT_i defines one end of its sight-line, it will be at distance 0 from the source of ORT_i. We may also employ a simple average of the effect of each ORT_i on an element, for all i = 0..n, as we do in the following examples.
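A sketch of this composition, using the simple average described above; `ort_displace` here is any single-ORT displacement function, such as the earlier sketch, and the weighted variant would simply scale each contribution before averaging. The names are illustrative.

```python
import numpy as np

def compose_orts(p, orts, displace):
    """Average the displacements produced by several ORTs on one element.

    `orts` is a list of (ooi, vp) pairs; `displace(p, ooi, vp)` is a
    single-ORT displacement function such as the sketch given earlier.
    """
    if not orts:
        return p.copy()
    displacements = [displace(p, ooi, vp) - p for ooi, vp in orts]
    return p + sum(displacements) / len(orts)

# Usage with a toy displacement that pushes points directly away from each OOI.
toy = lambda p, ooi, vp: p + (p - ooi) * 0.1
orts = [(np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 10.0])),
        (np.array([5.0, 0.0, 0.0]), np.array([0.0, 5.0, 10.0]))]
print(compose_orts(np.array([1.0, 1.0, 1.0]), orts, toy))
```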
Figure 3.2: Increasing degree of application of two ORTs to reveal two objects of interest (highlighted here with darker color) in a 3D graph layout.
Figure 3.2 illustrates the progressive, simultaneous application of two ORTs to a 3D graph in order to reveal two objects of interest (blue), one near the front of the layout, the second nearly at the back (as seen from the current viewpoint).
Because the viewpoint is an integral component in its construction, the ORT remains oriented properly and continues to provide occlusion reduction as the viewpoint is moved.
Figure 3.3 illustrates the rotation of this representation without the application of ORTs to reveal the two focal points. Figure 3.4 shows the same sequence of motion with the ORTs in place.

Figure 3.3: Rotation of the 3D graph to illustrate the occlusion of two objects of interest (nodes highlighted with darker color). A clear view of even the nearer of the two in the structure is available from only a limited range of viewpoints.
Figure 3.4: The same three viewpoints and same two objects of interest (now highlighted and increased in scale for emphasis) with the application of ORTs. Even the node at the far side of the graph is visible through the sight-line-clearing effect of the ORT.
In Table 3.2 we return to the examples of the simple extension of the orthogonal stretch, nonlinear orthogonal and nonlinear radial algorithms to 3D representations. The first row of Table 3.2 shows the same three figures as the first three columns of row two in Table 3.1. The bottom row in Table 3.2 illustrates the effect of an ORT on the layout, providing visual access to the previously occluded nodes of interest.

Table 3.2: The effect of adding an ORT to the 3D extensions of some common 2D layout adjustment schemes (orthogonal stretch, orthogonal non-linear and radial non-linear) for detail-in-context viewing. In the first row of images ("3D, no ORT") we see the simple extension of the approaches to 3D; the central focal point is even more occluded than before the layout adjustment in most cases. The second row ("3D, with ORT") adds the operation of an ORT to clear the line of sight from the viewpoint to the focal point.
3.2.3 ORT-Relative Coordinate Systems

The sight-line is the simplest primitive we can employ to produce an effective ORT. Likewise, the nearest-point measurement in Euclidean space is the simplest distance measurement. In order to facilitate the description of a wider range of ORT operators, we will construct a local coordinate system (CS) for each ORT. The creation of a local ORT coordinate system requires two vectors and a point in three dimensions, from which we can derive the position and orientation of the ORT CS relative to the world CS.

We will call the location of the object of interest associated with an ORT the focal point (FP). We will use this end of the sight-line as the location of the origin of the ORT CS. The direction from the focal point to the viewpoint will form one of the two vectors needed in order to orient the ORT CS. For the second vector we will use the UP vector of the viewer, or camera, coordinate system. Typically this direction is positive y, <0, 1, 0>, or "up" in the world CS. In order for the ORT CS to be properly defined we must ensure that the vectors VP-FP and UP are not parallel.
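A sketch of this construction as an orthonormal basis, with the origin at the focal point and the positive z axis directed at the viewpoint. This is one conventional way of building such a basis; the function names are illustrative.

```python
import numpy as np

def ort_coordinate_system(fp, vp, up=np.array([0.0, 1.0, 0.0])):
    """Return (origin, rotation) for an ORT CS centred on the focal point.

    The rows of `rotation` are the ORT CS x, y and z axes expressed in
    world coordinates, with +z pointing from the focal point towards the
    viewpoint.  VP-FP and UP must not be parallel.
    """
    z = vp - fp
    z = z / np.linalg.norm(z)
    x = np.cross(up, z)                  # x is perpendicular to both UP and z
    if np.linalg.norm(x) < 1e-9:
        raise ValueError("UP and VP-FP are parallel; the ORT CS is undefined")
    x = x / np.linalg.norm(x)
    y = np.cross(z, x)                   # completes the right-handed basis; UP lies in the x = 0 plane
    return fp, np.vstack([x, y, z])

def to_ort_cs(p, origin, rotation):
    """Transform a world-space point into the ORT coordinate system."""
    return rotation @ (p - origin)
```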
Figure 3.5: Annotated framework of diagrams illustrating the relative shape of a selection of ORT functions. On the left (a) the ORT coordinate system (CS) z-axis is aligned with the world CS z-axis; on the right (b) the camera position (VRP) has been moved and the ORT CS is re-oriented to track the change.
With these elements we can construct a coordinate system that is centered on the focal point, with the positive z axis oriented towards the viewpoint. By this construction the x = 0 plane of the ORT CS contains the UP vector from the world CS. A rotation of the ORT CS around the sight-line is a simple matter of rotating the UP vector around the VP-FP vector in the world CS and using this rotated vector in the construction of the ORT CS. Figure 3.5 illustrates the configuration of the ORT CS, world CS and viewpoint, or camera CS.

3.3 Distortion Space

In order to determine the effect of an ORT on a layout we transform each point p = (x, y, z) into the ORT CS via an affine transformation. This yields a new coordinate p' = (x', y', z') in the ORT CS. If the value of z' is greater than 0, then the point is somewhere between the object of interest and the viewpoint. In this case the distance is measured in the xy plane of the ORT CS only, which measures the distance of the point to the sight-line. If the value of z' is less than zero, then the point is further away from the viewpoint than the object of interest.
The advantage of the ORT CS is that the description of more complex distributions of ORTs is greatly simplified. Any transformation that will produce a reduction of the element density along the positive z axis of the ORT CS will achieve the desired result of occlusion reduction.
Thus far we have seen one distribution of displacements that we can characterize as having a linear source and being truncated. This ORT operates relative to the sight-line, the z axis in the ORT CS, and its distribution is truncated on the far side of the object of interest from the viewpoint. This produces a cylindrical region of effect where the far end of the cylinder from the viewpoint blends into a hemispherical cap.
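In the ORT CS this truncated, linear-source distribution reduces to a simple distance rule, sketched below: in front of the focal point the distance is radial, to the sight-line; behind it, to the focal point itself, which blends the cylinder into the hemispherical cap. The function name is illustrative.

```python
import numpy as np

def truncated_sightline_distance(p_ort):
    """Distance measure for the truncated, linear-source ORT.

    `p_ort` = (x', y', z') is a point already expressed in the ORT CS.
    For z' > 0 (between the object of interest and the viewpoint) the
    distance is measured to the sight-line; for z' <= 0 it is measured
    to the focal point at the origin.
    """
    x, y, z = p_ort
    if z > 0.0:
        return float(np.hypot(x, y))
    return float(np.linalg.norm(p_ort))
```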
In addition to such a linear-source function we may also describe an ORT that is derived relative to either the y = 0 or x = 0 plane of the ORT CS. Each of these planes contains the z axis of the ORT, and therefore displacements of points away from these planes will reduce occlusion along the sight-line as well as across the plane.
It is also possible to apply a transformation relative to all of the cardinal axes or planes of the ORT CS, in the same manner as they may have been constructed relative to the cardinal axes of the world CS (Figure 3.6). If defined relative to the ORT CS, the deformations will remain aligned to the VRP.
Truncating the distribution at the z = 0 plane is only one possible distribution of displacement through the depth of the ORT CS. We may also continue with a constant application of the ORT along the z axis of the ORT CS, or linearly scale the application of the ORT so that it falls from its maximum at the near side of the information layout to zero at the origin of the ORT CS, or at the back of the information layout. In Figure 3.7 we see the Gaussian basis function extruded in z; in Figure 3.8 the degree of the function is scaled to zero at the far end of the space.

Figure 3.6: Schematic of the orthogonal stretch ORT. The distance of points is measured to the nearest of the three planes passing through the focal point.

Figure 3.7: Linear extrusion through the z axis of the functions describing the operation of a detail-in-context layout adjustment scheme. The Gaussian curve f(x) = e^{-10.0 x^2} forms the basis.
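A sketch of these depth distributions, using the Gaussian basis quoted in the caption of Figure 3.7; the constant 10.0 follows that curve, and `depth_extent` is an assumed name for the distance over which the linear taper falls to zero.

```python
import numpy as np

def gaussian_basis(d):
    """Basis curve f(d) = exp(-10.0 * d**2), as in Figure 3.7."""
    return np.exp(-10.0 * d ** 2)

def depth_weight(z, mode="truncated", depth_extent=1.0):
    """Scale the degree of the ORT along the z axis of the ORT CS."""
    if mode == "constant":
        return 1.0
    if mode == "truncated":
        return 1.0 if z > 0.0 else 0.0
    if mode == "linear":
        # Falls from its maximum at the near side (z = depth_extent) to zero at the origin.
        return float(np.clip(z / depth_extent, 0.0, 1.0))
    raise ValueError("unknown z-distribution: " + mode)

# The displacement magnitude is the product of basis value and depth weight.
print(gaussian_basis(0.2) * depth_weight(0.5, "linear"))
```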
If the ORT is defined as having a plane source, then elements of the representation will be pushed away from this plane by the action of the ORT. In this case the distribution of the ORT function across the plane, perpendicular to the z axis of the ORT CS, may also be modified by a shaping function. This function controls the degree of application of the ORT in order to spatially constrain the transformation and thereby preserve more of the original layout. These shaping functions may be any curve that modulates the degree of the ORT from a weight of 0, no effect, to 1, the original effect without a shaping function. Figure 3.9 illustrates the effect of a Gaussian shaping function on an ORT defined relative to the y = 0 plane. The extent in width of the shaping function may be adjusted independently of the degree of the ORT.

Figure 3.8: The same graphs, now illustrating the effect of linearly scaling the application of the basis function according to depth in z.

Figure 3.9: A secondary shaping function applied to the horizontal plane-relative ORT ((a) horizontal plane-relative; (b) addition of shaping function). Scaling in z is constant, but the addition of the shaping curve can be used to constrain the extent of the plane-relative function in x.
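A sketch of a plane-relative ORT modulated in this way: the basis falls off with distance from the y' = 0 plane, the shaping curve weights the effect from 0 to 1 according to x', and the two are multiplied. The widths, names and Gaussian shaping curve are illustrative assumptions.

```python
import numpy as np

def plane_ort_displacement(p_ort, strength=1.0, sigma=0.5, shape_width=1.0):
    """Displacement (in ORT CS coordinates) for a horizontal plane-relative ORT.

    Points are pushed away from the y' = 0 plane; a Gaussian shaping curve
    in x' constrains the extent of the cut across the plane.
    """
    x, y, z = p_ort
    basis = np.exp(-(abs(y) / sigma) ** 2)        # falls off with distance from the plane
    shaping = np.exp(-(x / shape_width) ** 2)     # weight of 0 (no effect) to 1 (full effect)
    magnitude = strength * basis * shaping
    direction = np.sign(y) if y != 0.0 else 1.0   # push away from the plane
    return np.array([0.0, direction * magnitude, 0.0])
```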

Figure 3.10: Distance measurement according to the L_p metric in three dimensions, for p = 1, 1.5, 2, 2.5 and 5 ((a) through (e)).
d_{L_p}(x, y, z) = (|x|^p + |y|^p + |z|^p)^{1/p}    (3.1)

As with the use of alternative distance metrics to achieve different distributions of layout adjustment in 2D → 2D transformations, measurement of distance according to different metrics in 3D may be applied with a similar effect. For instance, we may elect to measure distance with an L_p, rather than Euclidean, distance metric.
The conversion of a 3-dimensional point to measurement with the Lp metric is shown in equation 3.1. If the profile of the ORT is computed with an Lp distance metric where p = 1, then the ORT will have a diamond-shaped rather than round appearance.
Increasing the value of the parameter p well beyond 2 will shape the opening in a progressively more squared-off manner (Figure 3.10).
d_{sq}(x, y, z) = ((|x/a|^{2/ew} + |y/b|^{2/ew})^{ew/ns} + |z/c|^{2/ns})^{ns/2}    (3.2)

The use of a Super-Quadric distance metric for modeling with implicit surfaces was explored by Tigges et al. in [102]. The conversion of Euclidean distance to Super-Quadric distance is shown in equation 3.2, where the ew and ns parameters control the shape of the space. In determining distance from a source, varying these parameters provides independent control of the front-to-back and cross-sectional profiles of the shape of the basic ORT function (Table 3.3).
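The two alternative metrics, written directly from equations 3.1 and 3.2 (the scale factors a, b and c default to 1 in this sketch):

```python
def lp_distance(x, y, z, p=2.0):
    """L_p metric of equation 3.1; p = 1 gives a diamond-shaped profile, large p a squared-off one."""
    return (abs(x) ** p + abs(y) ** p + abs(z) ** p) ** (1.0 / p)

def superquadric_distance(x, y, z, ew=1.0, ns=1.0, a=1.0, b=1.0, c=1.0):
    """Super-Quadric metric of equation 3.2; ew and ns independently shape the
    cross-sectional and front-to-back profiles."""
    cross_section = abs(x / a) ** (2.0 / ew) + abs(y / b) ** (2.0 / ew)
    return (cross_section ** (ew / ns) + abs(z / c) ** (2.0 / ns)) ** (ns / 2.0)

print(lp_distance(1.0, 1.0, 1.0, p=1.0))       # L1 distance: 3.0
print(superquadric_distance(1.0, 0.0, 0.0))    # unit distance along x
```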
To summarize, in the description of an ORT operator we may define the basis function for the transformation, the source of the function, the profile of the ORT function along the z axis of the ORT CS, the application of a shaping curve for plane-relative ORTs, and the distance metric. We tabulate this parameter space in Table 3.4.

Table 3.3: The SuperQuadric distance metric allows separate specification of the ns and ew shaping parameters to achieve a wider range of possible metric spaces.

Parameter        Possible Values
basis            linear, gaussian, inverse hemisphere, hemisphere, cosine, user-defined (0 ≤ f(x) ≤ 1)
source           linear; planar (horizontal, vertical or rotated by α degrees); cardinal axes; principal planes
z distribution   constant, truncated, linear, short linear
shaping curve    constant, gaussian, linear, etc., varying (0..1)
distance metric  Euclidean, L_p (p), Super-Quadric (ns, ew)

Table 3.4: Parameters available in the definition of ORT operators.
Table 3.5 illustrates some of the range of such ORT descriptions via simplified schematic diagrams. In each figure the ORT CS, the world CS and the viewpoint (camera CS) are indicated by triples of arrows. Two orientations of the viewpoint and ORT CS are displayed in the world CS for each combination of function source and distribution. The z axes of the ORT and world CS are parallel on the left of each image and an oblique viewpoint is shown on the right. The left column of the table illustrates ORT operators relative to a linear source and the right column illustrates ORTs defined relative to the y = 0 or x = 0 plane of the ORT CS.

Table 3.5: Some of the space of ORT specifications possible by varying the source and distribution of the operator (columns: Linear, Linear (new VRP), Planar, Planar (new VRP)). The left column illustrates ORTs defined relative to the z-axis of the ORT CS; the right column illustrates ORTs defined relative to the y = 0 and x = 0 planes of the ORT CS.
Chapter 4

Applications

Having established a framework for the description of occlusion reducing transformations (ORTs) of information layouts in 3D, we examine the particular details of applying these operators to a number of representations. We first return to our definition of three broad categories of 3D information representations: discrete, contiguous and continuous (Figure 4.1).

Figure 4.1: Three classes of three-dimensional data representations: (a) discrete, (b) contiguous, (c) continuous.

Discrete information layouts include node and edge structures or 3D scatter-plot layouts. These may be 3D graph layouts, or molecular models in the "ball and stick" format. Representations of this class were characterized as being spatially ordered, where adjacency in the structure is illustrated by connections, such as edges, rather than physical adjacency of the components.
The second category we defined, contiguous information representations, included 3D models, finite element sets, CAD data and so on. In these representations not only was spatial ordering important, but so were the physical properties of adjacency and containment. Transformations of these representations involve consideration of these properties, as the translation of components through each other defeats the presentation of the objects as 3D physical models.
The last class we defined included representations of datasets that were essentially continuous in nature. That is, the data may have been truly continuous, as the product of 3D parametric equations producing a volumetric function, or may have been such finely discretized datasets as to appear continuous, such as volumetric medical imaging, geophysical or fluid-dynamics data. These datasets were generally rendered with methods belonging to the field of volume rendering and present a specific challenge in dealing with their large sizes.
Throughout the balance of this chapter we examine the application of occlusion reducing transformations to representations belonging to each of these three broad categories.
4.1 Discrete Data Representations

Our first class of 3D representations, discrete, is in some situations the class least susceptible to the effects of occlusion in a 3D layout. In relatively sparse representations the likelihood of data elements being arranged in such a manner as to result in occlusion from any particular viewpoint is relatively low. There are a number of situations, though, in which the likelihood of occlusion becomes an issue.
Increasing the number of discrete data elements in a particular layout increases the likelihood that information elements will be laid out in such a manner as to cause an occlusion situation from any particular viewpoint.
Local density variations cause clustering in regions even of smaller discrete layouts. This phenomenon, and the use of local scale adjustment to improve the situation, is presented in [59]. In this system multidimensional data sets are represented as 3D scatter plots against axes representing a three-dimensional frame of the n-dimensional data. A straightforward extension of nonlinear magnification fields from 2D to 3D is applied in order to increase the apparent size of local clusters of data and thus enhance the visibility of the spatial characteristics of the data.
4.1.1 Regular 3D Graph Structures

In applying ORTs to discrete information layouts we first use the example of a 9 × 9 × 9 element 3D grid-graph, as seen in Figure 4.2. The regular spatial structure of this graph lends itself to illustrating the effect of layout adjustment, as well as providing a relatively dense, if uniform, distribution.
Figure 4.2: The original layout of the 9 × 9 × 9 3D grid-graph.

The 3D lattice graph in this application has simple connectivity relationships between nodes and their nearest neighbors in x, y and z. The edges of the graph are rendered to represent these relationships. We apply the turntable metaphor to interaction of the viewer in this system, and the viewpoint will normally be found outside the bounds of the graph. For a structure of 9³ nodes, a node in the center is likely to be occluded by nodes in 4 other layers regardless of the choice of viewpoint.
Figure 4.3: The orthogonal stretch algorithm aligned to the principal planes of the data layout space (a) and aligned to the viewer as an ORT operator (b).
In these examples we color the nodes of the graph with a scale that ranges from light grey to blue, the degree of the change to blue being proportional to the displacement a node has undergone from its original location. This coloring has the effect of illustrating some of the shape and distribution of the ORT operator through the representation. As discussed in Section 3.3, a wide range of combinations of ORT function sources and distributions are possible. In Figure 4.3(a) we see an orthogonal-stretch layout adjustment algorithm applied to the 9³ graph as we saw in Table 3.1. Again the central node is the object of interest, but here the remaining nodes of the graph are colored to illustrate the displacement they have experienced. In Figure 4.3(b) the same function is applied as an ORT operator, and now remains aligned to the viewpoint.
In Figure 4.4 we present an ORT that is defined relative to the sight-line connecting the object of interest to the viewpoint. In this instance the object of interest (OOI) is the central node in the 9³ graph and the distribution of the ORT has been truncated at the position of the OOI. The shape of this function is similar to that illustrated schematically in the top-left image of Table 3.5.
If the sight-line is extended through the node of interest, then the ORT results in a clear sight-line which isolates the node against the background; we see this result in Figure 4.5(a). If the visual clutter of nodes behind the object of interest had interfered with its examination, then this pattern of layout adjustment distribution may be useful. The shape of this function is similar to that of the left image of row two in Table 3.5.

Figure 4.4: The 3D grid-graph with the central node specified as the object of interest. An ORT has been applied to reduce occlusion. The color of the remaining nodes in the graph represents the degree to which they have been displaced by the ORT; the darkest nodes have been moved the most.
Other possibilities include a tapered, cone-like distribution of the ORT function, which is seen in Figure 4.5(b). The shape of this operator, illustrated in the image on the left side of row four in Table 3.5, will be of more interest in the application areas we will discuss later in this chapter.
Choosing a plane containing the sight-line as the source of the displacement function provides a means of interactively cutting into the structure and having this "cut" follow the sight-line as the viewpoint is moved around the structure. The two simplest cases of this form of ORT are vertically and horizontally positioned planes, which produce vertical or horizontal cuts into the representation respectively (Figure 4.6(a)). Here the truncated or tapered distributions are particularly effective, creating a book-like opening in the representation (Figure 4.6(b)). This method provides good visibility in the spatial neighborhood of the object of interest, more so within the plane than perpendicular to it. These images provide examples of the shapes of ORTs seen in the images on the right side of rows two and three of Table 3.5 respectively.

Figure 4.5: Examples of constant (a) and linear (b) scaling of the application of the ORT along the z axis of the ORT coordinate system. The constant scaling isolates the object of interest against an empty background while the linear scaling looks very similar to the line-segment-relative application.
4.1.2 General 3D Node and Edge Structures

Rather than generate a set of more randomly arranged 3D graphs, we will use a ready-made set of examples from chemistry. Ball and stick models of molecular structures are a common means of representing chemical compounds; an example of such a structure is the caffeine molecule in Figure 4.7. In many respects these structures are similar to 3D graphs, except that here the length of edges tends to be shorter and the number of edges incident on a node is limited by the bonding properties of the atom. That said, these models are used to represent complex structures where the geometry of the layout is potentially more pertinent to the interpretation of the representation than in abstract layouts of 3D graphs.

Figure 4.6: ORT functions applied relative to a horizontal plane through the object of interest, with constant (a) and tapered (b) distributions. Objects within the plane remain in plane while those above and below are displaced. In (a) the operator is data-axis relative, and does not track changes in the viewpoint. The operator in (b) is viewpoint aligned.
As a relatively simple initial example we will deal with a model of the chemical composition of caffeine (Figure 4.8). This molecule consists of only 24 atoms and 25 chemical bonds, so occlusion is not a particular problem here. This allows us to discuss the effects of the application of ORTs to this domain of representations. We will then see the application of an ORT to a substantially more complex chemical compound.
In these examples we represent the atoms as colored spheres; here we see carbon atoms represented as dark grey spheres, hydrogen as white, oxygen as red and nitrogen as blue. We select as our object of interest one of the oxygen atoms and apply a sight-line relative ORT function. We truncate the ORT at the depth of the OOI so as not to disturb the layout of atoms on the far side. Now, as the viewpoint moves around the structure, the other atoms are gently deflected away from the sight-line and return to their original positions as the sight-line passes by. This effect is illustrated by comparing the sequences of images in Figure 4.8 with those in Figure 4.9. In Figure 4.8 the atom of interest is highlighted. Without the application of an ORT this atom is occluded as the viewpoint is rotated about the structure. In Figure 4.9, with the application of an ORT, the atom of interest remains visible.

Figure 4.7: Caffeine molecule: C8H10N4O2.

Figure 4.8: Movement of the viewpoint around the caffeine molecule without the application of any ORT functions.
There is a choice to be made here: whether or not to distort the edges representing the bonds between the atoms. The relevant trade-offs are between the increased cost of rendering edges as piece-wise linear approximations of the curved paths which they take through the ORT-influenced space, and the detrimental effect straight edges may have when they are not subject to the effect of the ORT. The immunity of the edges from the ORT detracts from the effect of the ORT on the representation as a whole. In the current example, leaving the bonds undistorted means that even if two atoms are displaced away from the sight-line in opposite directions, the bond connecting them may remain in place, or be moved into place, in front of the object of interest. This may introduce a small amount of occlusion, but it may create a great deal of visual clutter in front of the object of interest.

Figure 4.9: The oxygen atom indicated in (a) is selected as the atom of interest for a linear-source ORT. The same movement of the viewpoint is performed around the caffeine molecule and this atom remains visible as other atoms are deflected away from the sight-line.
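If the bonds are instead allowed to follow the distortion, each edge can be drawn as a piece-wise linear approximation of its curved path: sample points along the straight bond, displace each sample with the ORT, and render the resulting polyline. The sketch below assumes a generic single-point displacement function such as the one sketched earlier; the segment count is an illustrative trade-off between smoothness and rendering cost.

```python
import numpy as np

def distorted_edge(a, b, displace, segments=8):
    """Approximate the curved path of an edge through ORT-influenced space.

    The straight edge from position `a` to position `b` is sampled at
    `segments` + 1 points; each sample is displaced by the ORT and the
    displaced samples are returned as a polyline for rendering.
    """
    ts = np.linspace(0.0, 1.0, segments + 1)
    return [displace(a + t * (b - a)) for t in ts]

# Usage with a toy displacement; in practice `displace` would apply the ORT.
toy = lambda p: p + np.array([0.0, 0.1, 0.0]) * np.exp(-np.dot(p, p))
polyline = distorted_edge(np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]), toy)
print(len(polyline))
```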
As a more complex example we use the molecular structure of vitamin B12 (C63H88CoN14O14P). Here we select at random a particular oxygen atom as the object of interest. In Figure 4.10 we apply an ORT, increasing the degree of application over several frames. We use the same locally constrained, sight-line-relative distortion function as in the previous example. The detailed views in Figure 4.11 provide a clearer picture of the effect of this distortion on the local layout.
Other possibilities within this domain include the selection of chemical substructures as objects of interest rather than individual atoms. For example, a benzene ring may form a structure of interest that would be cleared of occluding elements and remain undistorted as its local neighborhood and relationship to the overall structure is studied.

Figure 4.10: Sequence illustrating the application of a linear-source ORT to the structure of vitamin B12. The oxygen atom selected as an atom of interest is in the region indicated by the overlay box.

Figure 4.11: A detail view of the region indicated by the overlay box in the previous figure. The result of the successive application of a linear-source ORT to the (initially hidden) oxygen atom is illustrated.
More complex representations of molecular structures, and particularly proteins, are common in biochemistry. Protein structures form complex spatial folding arrangements, the intricacies of which are of particular interest in the function of proteins during biological processes. These convoluted structures are often represented visually as ribbons that illustrate the winding, twisting and folding of the molecular chain which comprises a protein. These representations are often dense and involve considerable occlusion issues. An interesting future area of work would be to apply ORTs to the interactive investigation of these more complex visual representations.
4.1.3 Hierarchical 3D Graph Structures

Figure 4.12: A selected leaf node in a cone tree layout of a directory structure is indicated by the overlay in (a). This node is brought to the front through concentric rotations of the cone tree structure, (b) through (d).

Moving away from the biosciences and back to the realm of the information sciences, we can explore the application of ORTs to one more form of 3D graph layout, cone trees. These structures provide a means of creating a 3D layout of a hierarchical information set. In a typical implementation of cone trees, specifying a node of interest within the structure leads to the structure being adjusted automatically such that the node of interest is rotated to the front-and-center position, as seen in Figure 4.12. This works well in the case of a single object of interest, but the mechanism does not readily extend to provide a means of dealing with two arbitrarily specified objects of interest.
If an additional step is taken in the interaction scenario, then we can use ORTs to support interaction with multiple nodes of interest within the cone tree framework.

One area for the application of cone trees is the display of directory and file structures. If, in this case, a user is searching for a particular version of a file within the layout, a scan of the file system may yield several potential candidates. With single-focus operation each of the files produced as a result of the search must be examined in a sequential manner. With the addition of multiple ORTs, each providing occlusion reduction for one of the search results, a multi-focal 3D detail-and-context overview is possible. This display facilitates the addition of more detailed information (file date, path, author...) to each result (either simultaneously if there are relatively few results, or as the user hovers the cursor if there are too many results for simultaneous display).
Figure 4.13: Two leaf nodes, labelled a and b in (a), are selected simultaneously. Application of two ORT operators improves the visibility of these nodes without explicitly rotating one or the other to the front, (b) and (c).
Once the multiple objects of interest are defined, navigation of the viewpoint is possible while the objects remain visible, as illustrated in Figure 4.14. This is an inherent property of each ORT incorporating the current viewpoint. We see this from a secondary viewpoint in Figure 4.15.

Figure 4.14: Once ORT operators are attached to nodes a and b, in (a), these nodes remain visible during movement of the viewpoint, (b) and (c).
Figure 4.15: The area of influence and viewpoint alignment of the ORT operators in the previous sequence, as seen from a secondary viewpoint. The ORT operators remain aligned to the primary viewpoint as it is moved around the cone tree.
4.1.4 3D Desktop Environment

Our final example for the application of ORTs in discrete information spaces is a 3D-desktop-style environment, seen in Figure 4.16. To demonstrate this application we have implemented a prototype of such an environment on a personal computer running the Microsoft Windows operating system. As the prototype initializes it "grabs" images of each application currently running on the user's desktop. These images are attached to polygonal surfaces within the 3D-desktop environment, in which the user is able to navigate by movement of the viewpoint with a turntable metaphor.
Within this environment the user can arrange the 3D windows by dragging them, as illustrated in Figure 4.17. A single window may also be brought to the focal position, immediately in front of the viewer, where it appears at the same scale as it would on the user's desktop.
Users cannot currently interact with the applications in this environment; that would require the construction of an operating-system-level redirection mechanism such as was described in the Task Gallery system [90]. There is also no means of interacting with the operating system, either to launch new applications or terminate those already running. In any case, the development of this prototype was an effort to explore the application of ORT operators within such an environment, rather than the creation of a fully-functional 3D-desktop system. Interactions within the environment are restricted to the arrangement of windows in three dimensions and navigation of the viewpoint.
After a user selects a window, either by clicking on it once or, when no window is currently selected, by hovering the mouse over it, that window becomes marked as an object of interest. Once a window is marked as an OOI, an ORT function is applied to resolve any potential occlusion situations. In Figure 4.17 the selected window is being pushed to the back of the scene. This results in the sight-line moving through the cluster of un-selected windows, and the effect of the ORT is to move these windows away from the neighborhood of the sight-line. Figure 4.18 shows the second and third images of the previous sequence, as well as annotations to indicate the effect of the ORT moving windows away from their original locations. New ORTs are introduced over a number of frames, producing a smooth transition between the previous state of the layout and the new layout. If a selection results in the transfer from one object of interest to a second, then the original ORT is removed in a similar manner, producing a cross-fade between the two states of the layout. Were the layout to "jump" between states, the task of tracking changes in the layout would detract from the principal task of interaction with the desktop environment.

Figure 4.16: The 3D desktop environment.

Figure 4.17: As the selected window is pushed back through a cluster of windows in the 3D desktop environment, the cluster is dispersed in order to prevent occlusion of the selected window.
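A sketch of this frame-by-frame transition: a newly introduced ORT's weight is eased from 0 to 1 over a fixed number of frames, and a removed ORT's weight from 1 to 0, so that the layout cross-fades rather than jumping between states. The frame count and the smoothstep easing curve are illustrative choices, not the prototype's actual values.

```python
def ort_weight(frames_since_event, ramp_frames=20, removing=False):
    """Weight (0..1) applied to an ORT's displacement during a transition."""
    t = min(max(frames_since_event / float(ramp_frames), 0.0), 1.0)
    t = t * t * (3.0 - 2.0 * t)              # smoothstep easing for a gentle start and stop
    return 1.0 - t if removing else t

# A new ORT fades in while the previous one fades out, giving a cross-fade.
for frame in range(0, 21, 5):
    print(frame, round(ort_weight(frame), 3), round(ort_weight(frame, removing=True), 3))
```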
Once a window has been selected as a focus by clicking on it, it remains the object of interest until it is de-selected, either by clicking on another window or clicking over "empty" space, which de-selects all windows. As long as a window is selected it remains un-occluded as the user navigates through the space or changes the position of the window. As the user drags the selected window behind another group of windows, they are temporarily "pushed" off of the sight-line by the influence of the ORT, as seen in the sequence of images in Figure 4.19.

Figure 4.18: Annotated images from the previous sequence illustrating the initial position (boxes) and movement (arrows) of the selected (solid line) and other (broken line) windows.
The effect of using ORTs on windows in such a 3D environment bears a strong resemblance to the use of "Page Avoidance Behavior" in the Data Mountain [88] and Task Gallery [90] systems, which we described in Section 2.x.3.
The actual distribution of layout adjustment in Data Mountain is determined differently from that with ORTs, with each element of the layout in Data Mountain seeking to maintain a minimum separation distance from all other elements in order to avoid situations of occlusion. However, the effect of moving a page through a cluster of other elements which avoid it is similar to the effect produced by attaching an ORT to an element and repeating the scenario. We endeavor to convey the effect of this action in Figure 4.20. Here the selected page is moved from point a to point b along the indicated vector in Figure 4.20(a). The nearby windows are deflected away from their initial positions as the selected window passes by, returning to their initial positions after the selected window has passed. The clusters of arrows indicate the sequence of deflection vectors generated over time as the selected window moves through the scene. The darker arrows are earlier in the sequence, the lighter arrows later. Figure 4.20(b) illustrates the deflected positions of windows midway through the movement of the selected window.

Figure 4.19: As the selected window is moved from its initial position in the upper left of the view, the cluster of other windows which it passes in front of are dispersed by the action of the ORT attached to the selected window.
4.2 Contiguous Data Representations

Our second classification of information representations in three dimensions we termed contiguous data representations. We characterized these representations as having stronger adjacency and containment relationships than the discrete data representations we have just examined. Examples of contiguous data representations are models or parts assemblies in Computer Aided Drafting. Other examples would include surface data derived from volumetric data sets, such as medical imaging data, fluid dynamics, atmospheric or geophysical data.

Figure 4.20: Annotation of two frames from the previous sequence. As the selected window moves from position a to position b the remaining windows are deflected by the action of the ORT. The arrow clusters in (a) indicate the progression of deflection vectors for the remaining windows; early to late vectors in the resulting motion are shaded from dark to lighter grey. On the right, (b) illustrates the state of the layout at the midpoint of the sequence. Initial (grey boxes) and final (black boxes) positions of the windows are indicated, as well as their resulting displacements (arrows).

In such datasets the layout is comprised of components which will have physical relationships that may include containment or adjacency. In the application of ORTs to these representations it may be necessary to take these relationships into account. This may mean animating components of a parts assembly through a partial disassembly sequence before the parts come under the influence of the displacement of the ORT. While requiring a somewhat more complex model description, including some information about the assemblage of parts, such as containment relationships, which parts must be removed before others are free to move and so on, the application of ORTs provides a means of creating an interactive assembly diagram.
In such a system the elements of the model would disassemble themselves in order to provide clear visual access to a component of interest. As the viewpoint is moved around the structure, the model would re-assemble and take itself apart only where necessary to provide occlusion-free views of the component of interest. Increasing and decreasing the degree of application of the ORT would give the viewer the impression of the model flying apart and reassembling itself. Previous work on a related system has been presented; while that system included a level-of-detail function, there was no support for a viewpoint-driven mechanism of this kind. While we have not yet implemented such a system, it remains a promising application area for future work. To date we have only investigated the application of ORTs to models which do not contain complex containment and interconnection relationships between components.
4.2.1 3D Models

We have implemented two systems which apply ORTs to such component-based data. The first system is geared towards 3D models consisting of different parts, and we use the skeletal model of the human foot shown below as our example. The second system applies ORTs to surface data derived via the Marching-Cubes algorithm [70], displacing the outer layers of this data to reveal underlying elements. These objects of interest are excepted from the effect of the ORT displacements.
In our first system we apply ORTs to the examination of bones in the skeletal model of a human foot, as seen in Figure 4.21. The model does not have any containment relationships, and the system as implemented is not sufficiently sophisticated to deal with models that do contain such relationships.
By returning to our analysis of detail-in-context views created via 3D perspective distortion, we can take the magnification-producing aspects of these transformations to provide scaling of the components of 3D models. We will refer to these transformations as Magnification Producing Transformations (MPTs) and explore them in more detail shortly.

Figure 4.21: The skeletal model of the foot used in the following examples. This model contains 26 separate components and 4204 triangular faces.
This system incorporated a data-relative detail-in-context magnilic:ation capability derived from the continuous zoom algorithm (3'?, 4~ a5 a means of emphasizing particular compo-nents. For instance if the first metatarsal bone is the current focus of attention then other hones around it are scaled a.nd displaced in errler to provide sufficient spa<;e to inerea,s~ the scale of the bone crf interest,. 'T'hese techniques are described by the authors as being similar to those applied in traditional 2D medical illustration.
Tiy adding viewpoint-aligned ORTs to the model we can select a particular com-ponent of interest and reveal it via the action of the (~R:h. yVP see this illustrated in the sequence of images in figure 4.22. As the viewer rraviga,tcs a,rc.>und the model, all of the remaining comloonents are dynamically defiectec-1 oIT of the sight-line. In this manner a clear view of the selected component is maintained as seen in figure fig-ure 4.23. Figure 4.24 shows the model from the same sequence of viewpoints without CHAPTER 4. APPLICAT101VS 82 ~ ~..w ,' . '~. .r . ~ ~' ~' r.~.
L. '~. * .. ~ ~ ~", ~ 'i (a) (b) (c) Figure 4.22: The external cuneiform bone (circled in (a) and highlighted in all images) is selected as the focus and an URT operator is used to displace the remaining 2~
bones away from the sight-line.
,, i ,~. '~' ~. _.. °-,~ r;.r .,,'. ;~
~ ,_ ~, (a) (b) (~) Figure 4.28: Again the external cuneiform is the object of interest and remains visible in this sequence as the viewpoint moves around the model.
the addition of ()R:I's.
_~ttention emphasis through scaling may be applied in conjunction with the occlu-sion reduction of C)R.'1' operators. In a' manner similar to ~~om Illustra't'or we increase the scale of the component of interest and displace. rather than scale, the.
remain-ing components in order to provide sufficient morn for the increase in scale of the CHAPTER 4. APPLICATIO~'VS 83 r r .. ,~ . "'"n.~.
'~."- .~ .
~ ~~ ~;
(~) (b> (~) Figure 4.24: Again the extcrna.l cuneiform is the object of interest and remains visible;
in this sequence as the viewpoint moves around the model.
component of interest. ~~'e illustrate this combination of actions in figure 4.2~.
There are two potential mechanisms to achieve this scaling. We may ele<;t to scale components in place, simply adjusting the local scaling factor as each component is rendered at its original position. Alternatively we lnay elect to employ the effect of perspc:ctivce elistc>rtion to achieve scaling, using what, we have previously termed a mag-nification producing transformation (MPT). In this technique components are moved towards the viewpoint along the sight-line through their geometric center in order to magnify the component. ~~~e eau also move components away from t,hc. v icw point in order to compress or ara.i~ir,fy them. We illustrate the operation of an MPT
operator on the model of the foot as seen frond a sec°on<iary vlewpomt in figure 4.26. Here we see the model and a representation of the perspective viewing frnstllnl. Tn fig~lrP 4.2~(b) I;he focal component and those nearest. it are moved towards the principle viewpoint producing magnifica.tiou. The degree of nragnificat.ion depends on the ratio of original and final positions relative to the r-axis of the camera coordinate system as outlined in figure 4.'~ 1.
While substantially different ill mechanism from in place se a.ling, the MPT
method produces similar results once perspective. projection has been applied.
Figures 4.28(a) and 4.28(b) show the skelEaal model of the foots in t,wo orientations with no ORTs G"HAPTER 4. APPLICAT101V5' 84 (a) NcrScaling (b) Scaling for Emphasis Figure 4.25: In (a,) no scaling is applied, the effect of the ORT is simply to displace components and reduce occlusion. Tn {b) we have subsequently scaled components ac-cording to their geometric dista,nee from the oh,ject of interest, the Pxterna.l euneiforrrt bone.
applied. In figures 4.3q(a) a,nd 4.3(1(a) hoth an ()ITrI' and a. hlPr1' a,re added to provide occlusion reduction and perspective based scaling of the navieular bone. The images in figures x.31 (a) and 4.31 (h) dernonstratc the same degree of displacement and scaling, hut here the scaling is produced by in place component scaling.
From the principle viewpoint in the perspective projection system them is no apparent motion of the components in a,n iVIPT as they are constrained to move hack and forth along the vector of their original sight-lice.
The most, significant difference between the resulting images, produced by its-place or IVIPT scaling are in cases where aLC~jacent magnifit~d components begin to intersect each other as seers in the detailed view, Fgure 4.32(x). With a IG~1PT, components are separated in depth such intersections are resolved by the magnified components being rendered in front of the compressf.~d or less magnified elements. Partial occlusion of the smaller elements in the overlapping areas is the result as we see in figure 4.32(b).

CHAPTER 4. APPLICATIO~'VS' 85 (a) Perspective Viewing Fmstnm (b) Magnification via. Displacement Figure x.26: Figure (a) illustrates tire basic configuration of the perspective ~-iewirr~
volume and 3D model. Spheres indicate the location of the viewpoint, the view refer-cncc point and the point midway, b<,twcen. Components of the model arc translated along their individual lines of sight in (h) to produce magnification via perspective projection.
The application of tl~Il''I's in conjunction with depth-enhancing stereo vision sup-port, whether v is multiple screens in a head mounted display or more simply by rendering the scene as a red-blue 3I~ anaglyph lea,rls t,o interesting percept.ua.l eI-fect.s. The more magnified objects now appear not only laa~ger but closer than the less magnified components. The efficacy of this approach for producing magnification in exrn.junc'vtion with stereo viewing as well as an invesi,igation of the perceptual effects incurred provides an array of topics for future study.

Figure 4.27: The effect of decreasing the distance d from the viewpoint on projected scale in perspective projection. Final scale varies as the inverse of the change in distance.
Figure 4.28: Side (a) and front (b) views of the foot model with the navicular bone selected as an object of interest and highlighted. No distortion or magnification has been applied and the bone remains all but completely occluded in these two views.
4.2.2 Isosurface Set Data

A second part of our class of contiguous representations includes isosurfaces derived from volumetric data. This information is often generated by an algorithm such as Marching Cubes [70]. In many cases these surface extraction algorithms are applied successively to a data set in order to extract surfaces corresponding to the boundaries of various different components. They may also be used to extract surfaces from several spatially coincident sets of data.

Figure 4.29: Side (a) and front (b) views of the foot model with the navicular bone selected as an object of interest and highlighted. Distortion only has been applied to the layout of the model, with no scaling for emphasis.
In medical imaging, for example, several passes may be made to derive separate sets of surface data for bone, muscle, brain and tumor in a diagnostic cranial MRI scan. Figures 4.33 and 4.34 illustrate a selection of images from a diagnostic MRI scan and an associated lesion mask, as well as the corresponding isosurfaces of skin, brain and lesion derived via application of the Marching Cubes algorithm [70].
In dealing with this concentric layer occlusion there is no way to disassemble the components in order to provide clear visual access to the interior features. The most common approaches to providing access to the interior elements of such structures are through the use of transparency, component removal or cutting planes. As we have discussed earlier, each of these approaches has some undesirable effects, either in complicating the perception of the distinct surfaces or in that they remove considerable quantities of information from the display. Applying a modified, discontinuous ORT to the layers which occlude a component layer of interest in such a display makes it possible to produce a viewpoint-dependent clear visual path to the region of interest.

Figure 4.30: The navicular bone is selected as an object of interest and an ORT is applied to reduce occlusion. Simultaneously a small degree of magnification has been applied to emphasize the navicular bone and its neighborhood. Magnification here is produced through perspective transformation and as a result the navicular is rendered in front of other bones that may have still resulted in partial occlusion.
_~1 discontinuous ORT operates on the representation at level below that of discrete componcnt,s or !avers, acCing on the individual polygons (triangles) that comprise these surfaces. Triangles are transformed into the local coordinate system of the ORT and the resulting, displaced, locations of its vertices are determiner!.
T)iscontinuous ()RTs are so far limited to plane-relative functions. Triangles which span the source plane of the Function may be split, into conxponenta entirely on one side or the other and re-t,riangulaierl; leading to a clean cot surface. Other less computationally complex sokutions include moving the triangle to the side already containing the ma,.jorit,y oC
vPrt.ices, or leaving i,he pla.nP-spa.nrxing triangles out, of the final image a,lt,oget,her.
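To make the geometry concrete, the following sketch (added here; it is not the thesis implementation, and the function name, array layout and attenuation rule are illustrative assumptions) shows one way a plane-relative discontinuous ORT could act on a triangle soup, using the majority-side heuristic for plane-spanning triangles rather than re-triangulating a clean cut:

import numpy as np

def displace_triangles(triangles, plane_point, plane_normal, falloff, max_push):
    """Plane-relative discontinuous ORT applied to individual triangles.

    triangles    : (T, 3, 3) array of vertex positions.
    plane_point  : a point on the ORT source plane.
    plane_normal : unit normal of the source plane.
    falloff      : distance from the plane at which the push decays to zero.
    max_push     : displacement applied to vertices lying on the plane itself.
    """
    out = triangles.copy()
    for t, tri in enumerate(triangles):
        d = (tri - plane_point) @ plane_normal          # signed vertex distances
        side = d > 0.0
        if side.any() and not side.all():
            # Plane-spanning triangle: assign it wholly to the side already
            # holding the majority of its vertices (the cheap alternative to
            # splitting and re-triangulating for a clean cut surface).
            side[:] = side.sum() >= 2
        sign = 1.0 if side.all() else -1.0
        # Push vertices away from the source plane, attenuating linearly with
        # their distance from it, so a gap opens along the plane.
        w = np.clip(1.0 - np.abs(d) / falloff, 0.0, 1.0)
        out[t] = tri + sign * (max_push * w)[:, None] * plane_normal
    return out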
For example, a linearly-tapered, vertical-plane-relative ORT is applied to a representation derived from a diagnostic MRI scan of a Multiple Sclerosis patient and a volumetric map of lesions. Figure 4.35 illustrates the composition of the three layers from figure 4.34 rendered partially transparent in order to make the internal layers visible.

Figure 4.31: The same two views of the human foot with the navicular bone as an object of interest in the layout. Here magnification is produced via in-place scaling of the individual components. The most apparent difference is that in (b) the interior cuneiform bone now partially occludes the navicular.
Figure 4.36 illustrates the sequential application of an ORT to reveal the lesion layer, pushing back the outer brain and skin layers and providing an occlusion-free view of a portion of the lesion mask. This deformation will automatically follow the viewer as the viewpoint is manipulated to examine the data from a different angle. In these images the simplest approach to dealing with plane-spanning triangles was taken and they are not rendered in the final image.
4.3 Continuous Data Representations

Not all volumetric information is well represented by the contiguous surfaces we extract via methods such as marching cubes. In these cases direct volume rendering (DVR) methods may be applied to the volumetric data itself. These algorithms fall into three major categories based on the method in which they traverse the object to be rendered: image order, object order, or a hybrid of the two.

Figure 4.32: A detail view of the area just in front of the navicular bone with in-place scaling (a) and perspective scaling (b). The intersection of the external cuneiform and the third metatarsal in (a) is resolved in (b) by the relative displacement of the components in depth.
Inra.gP order DVR methods include re-pre>.jection, and ra.y t,ra.cing of the volume. In re-pro,ject,ion voxel values a.re averaged along parallel rays from each pixel in the view-ing plane. The resulting image resembles an ~-ray. Source-attenuation re-projection assigns a source strengtlc and at ,cnuation cocllicit~nt to each voxcl and allotve<l for obscuring of more distant voxels (96~. R.eprojection is a simple case of ray casting while applying a SI1M operator.
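As a minimal illustration (added here; the axis choice and lack of normalization weighting are assumptions rather than details from the text), re-projection reduces to averaging the volume along the viewing axis:

import numpy as np

def reproject(volume, axis=2):
    # X-ray-like re-projection: average voxel values along parallel rays,
    # here taken to be the grid lines of the chosen axis.
    return volume.mean(axis=axis)

# Example: a 64x64x64 volume projected along z yields a 64x64 image.
image = reproject(np.random.rand(64, 64, 64))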
Ray casting of the volume involves performing an image order traversal of the pixels in the image plane. Rays are cast from the viewpoint through each pixel and through the volume (Figure 2.1); the opacities and shaded intensities encountered are summed to determine the final opacity and color of the pixel. Rays continue to traverse the volume until the opacity encountered by the ray sums to unity or the ray exits the volume. When a ray intersects a cell between grid points an interpolation may be performed to find the value at the intersection point (Figure 2.2). Ray casting, while CPU intensive, produces high quality images of the entire data set, not just surfaces as in surface fitting algorithms such as marching cubes.
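The compositing step of ray casting can be sketched as follows (an illustrative sketch only; sampling, interpolation and shading are omitted, and the names and termination threshold are assumptions). Each ray accumulates color and opacity front to back and terminates early once the accumulated opacity approaches unity:

import numpy as np

def cast_ray(sample_indices, opacities, colors):
    """Front-to-back compositing along one ray.

    sample_indices : indices of the voxels the ray visits, nearest first.
    opacities      : per-voxel opacity values in [0, 1].
    colors         : per-voxel (already shaded) RGB values.
    """
    acc_color = np.zeros(3)
    acc_alpha = 0.0
    for s in sample_indices:
        a = opacities[s]
        # Each sample contributes in proportion to the remaining transparency.
        acc_color += (1.0 - acc_alpha) * a * colors[s]
        acc_alpha += (1.0 - acc_alpha) * a
        if acc_alpha >= 0.99:   # opacity has (nearly) summed to unity
            break               # early ray termination
    return acc_color, acc_alpha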

Figure 4.33: Example source images for the generation of Marching Cubes derived surfaces of MRI data: (a) Proton-Density layer, (b) T2 layer, (c) lesion mask.
Figure 4.34: Three separate surfaces from diagnostic MRI data of Multiple Sclerosis (MS) lesions: (a) Proton-Density layer, (b) T2 layer, (c) lesion mask. Proton-Density layers reveal outer surfaces such as the skin, T2 layers reveal neural tissue (brain and eyes), while the lesion mask indicates the location of MS lesions. These three data sets are used in the demonstration of the application of an ORT to volumetric data visualization.
Figure 4.35: The composite of the three layers from figure 4.34 is rendered as slightly transparent in order to make the spatial organization apparent.

Ray casting of volumes was first developed as a visible surface algorithm by Tuy and Tuy [106]. The first multi-valued volumes were ray-traced by Blinn in [10] as a means of rendering participating media.
Figure 4.36: Sequence illustrating the application of an ORT to isosurface data. The lesion mask layer (green) is not affected by the scaled and truncated planar deformation and is revealed as the outer layers are cut and pushed back.
This technique was later extended and applied to scientific information and general 3D textures by Kajiya, Kay and Von Herzen [54, 56]. Levoy in [68] presented a method that computed the partial occupancy of a voxel by different materials and derived the color and opacity from the various contributions.

Object order DVR methods are characterized as processing the scene in order of the elements of the data set, rather than pixel by pixel (Figure 2.3). The cuberille rendering algorithm [41] is a straightforward mapping of voxels to six-sided polyhedra (cubes). Hidden surfaces are normally removed when rendering with a z-buffer algorithm [2~] but it is also possible to determine a traversal order that yields the correct visible surfaces since a volume is so strongly spatially sorted. These orderings, such as front-to-back [42], back-to-front [37, 42] and octree based approaches [1~3], all yield a performance benefit. The blocky appearance of cuberille rendered images can be improved by shading the cubes according to gradient information rather than their geometry.
Splatting [117, 118] is an object order approach that operates by building up an image of the volume in the projection plane. The process is often likened to building up a picture by dropping appropriately colored snowballs representing the projection of each voxel. When the snowball hits the plane it splats and spreads the contribution of the voxel over an area.
We will examine the application of ORTs to two methods of object order volume rendering which utilize 2D and 3D texture mapping, as well as the blending capabilities of OpenGL™, to approximate the process of DVR. The first method we will examine is fast splatting [29].
4.3.1 Fast-Splat Rendering

Fast splatting is an object order approach in which each voxel of data is rendered in place as a small quad (4-sided polygon) which is colored by the volume data. A normal derived from the gradient of the volume across that voxel may also be associated with the quad and used in hardware-based illumination calculations. The quad used to represent the voxel is further modified by using an alpha-texture map that performs the function of the blending kernel in traditional splatting.
Figure 4.37: UNC head data set rendered via fast splatting.

The resulting colored and alpha-mapped quad is rendered into the OpenGL frame buffer in much the same way as a traditional splat contributes to the final image. The correct performance of this algorithm depends on the volume being traversed from back to front. Simply determining the axis of the data most parallel to the view direction and rendering planes perpendicular to that, back to front, means that the planes and the rendered quadrangles which comprise them are now within 45 degrees of perpendicular to the viewpoint. Figure 4.37 illustrates a data set of a human head, with the top and back of the cranium removed to reveal the outer surface of the brain, rendered with the fast-splatting approach.
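The traversal order can be sketched as follows (a sketch under assumptions; the pure-Python callback stands in for the OpenGL-blended, alpha-textured quad and is not the thesis code). The data axis most nearly parallel to the view direction is selected and its slices are walked from back to front:

import numpy as np

def splat_back_to_front(volume_rgba, view_dir, blend):
    """Visit voxels back to front along the axis most parallel to view_dir.

    volume_rgba : (X, Y, Z, 4) array of per-voxel color and alpha.
    view_dir    : unit vector pointing from the eye into the scene,
                  in volume coordinates.
    blend       : callback taking (x, y, z, rgba) that renders one splat.
    """
    axis = int(np.argmax(np.abs(view_dir)))      # axis most parallel to the view
    order = range(volume_rgba.shape[axis])
    if view_dir[axis] > 0:                       # far slices have larger indices
        order = reversed(order)
    for i in order:
        plane = np.take(volume_rgba, i, axis=axis)
        for u in range(plane.shape[0]):
            for v in range(plane.shape[1]):
                idx = [u, v]
                idx.insert(axis, i)              # rebuild the full (x, y, z) index
                blend(*idx, plane[u, v])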
In order to apply an ORT to the data the locations of the voxels are transformed into the space of the ORT coordinate system. A translation vector is determined and applied to vary the final, rendered, position of the individual quadrangles. Effects similar to the discontinuous ORTs discussed in Section 4.2.2 are achieved with the application of linearly attenuated plane-relative displacement functions (similar to the lower right images in table 3.~). Such an application is illustrated on the CT data of a human skull in figure 4.38. These functions have the effect of producing a "cut-into and retract" incision into the interior of the volume. The extent of the incision can be limited or modified by a shaping function to achieve a more constrained effect, as discussed earlier in Section 3.3.
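A sketch of the per-voxel displacement (the coordinate conventions and attenuation profile are assumptions, not a transcription of the implementation): each splat center is expressed in the ORT coordinate system, pushed away from the view-aligned source plane with a linearly attenuated magnitude, and mapped back to world space before its quad is drawn:

import numpy as np

def ort_displace_points(points, world_to_ort, ort_to_world, falloff, max_push):
    """Plane-relative ORT displacement of splat centers.

    points       : (N, 3) voxel centers in world space.
    world_to_ort : 4x4 affine transform into the ORT coordinate system,
                   whose x = 0 plane is the source plane of the operator.
    ort_to_world : the inverse transform back to world space.
    falloff      : distance from the source plane at which the push is zero.
    max_push     : displacement applied right at the source plane.
    """
    homo = np.c_[points, np.ones(len(points))]
    local = homo @ world_to_ort.T
    x = local[:, 0]                                    # signed distance to plane
    w = np.clip(1.0 - np.abs(x) / falloff, 0.0, 1.0)   # linear attenuation
    local[:, 0] += np.sign(x) * max_push * w           # push away from the plane
    return (local @ ort_to_world.T)[:, :3]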
Figure 4.38: The application of a vertical-plane-source ORT to CT data of a human skull rendered via fast splatting. Observe the increase in brightness at the edge of the ORT-induced split. This is the result of splat primitives overlapping.

The reason for the use of plane-relative incisions here, rather than simply employing a sight-line as we often did with more discrete forms of data, is that the "real-world" meaning of a point incision, stretched out large enough to provide any internal visual access, is difficult to establish. The interior, or bounding, surface of a line-relative incision would be formed by the intersection of a single ray with the volume data and would not reveal much meaningful visual information. Conversely, the application of a plane-relative displacement function produces incisions which have interior surfaces produced by the intersection of the source plane and the volume data. These cut surfaces carry much more useful information and provide a virtual approximation of a real-world incision.
Examples of the application of such ORTs to volumetric data rendered with the fast-splatting algorithm are seen in the following figures. Figure 4.39(a) is an image of an ORT applied to the UNC head data set. In the next image (Figure 4.39(b)) the representation is rotated without updating the viewpoint of the ORT in order to highlight the shape of the interaction of the ORT with the representation.
Figure 4.39: A horizontal-plane ORT applied to the UNC head data set: (a) view-aligned, (b) secondary viewpoint. In (a) the ORT is aligned to the viewpoint. In (b) we have moved the viewpoint independent of the ORT (disabled automatic tracking of the viewpoint) in order to illustrate the linear scaling of the application of the ORT in view-aligned depth. The ORT is scaled in depth from the front of the representation to the depth of the region of interest.

Figures 4.40(a) and 4.40(b) are again view-aligned and offset images of another ORT; in this case the extent of the function across the plane is truncated as well as scaled linearly in depth to produce the wedge effect. In figure 4.41 we apply an ORT to the full-color Visible Human Female data set.
There are some visual effects that the simple transformation of the splat-producing quadrangles produces as they are rendered. As quadrangles are "pushed aside" to make an incision into the representation they have a tendency to pile up and overlap more than they did in the original data layout. The effect of this increasing overlap is that in these regions there are additional contributions to the compositing process achieved with the OpenGL blending mechanism. This results in increased intensity of the color of the volume data in these regions. Figure 4.38(a) illustrates a CT data set of a skull rendered with the fast-splatting algorithm. Figures 4.38(b) and 4.38(c) demonstrate the application of a constrained, plane-relative ORT and make apparent the resulting brightening of the surfaces at the edge of the cut. Anisotropic scaling of the quadrangles in the regions of compression around the ORT could be applied to reduce or eliminate this effect.
Figure 4.40: The same data set and orientation of views: (a) view-aligned, (b) secondary viewpoint. Here a shaping curve has been added to control the extent of the ORT operator across the horizontal plane.
4.3.2 3D Texture-Based Rendering

The advent of high speed texture mapping hardware has made the application of this technology practical for use as a method of direct volume rendering. Previous approaches used hardware-assisted Gouraud-shading methods [97, 66] by calculating projections of volume regions and then treating them as polygons (coherent projection).
The possibility of using the rendering hardware of the Silicon Graphics Inc. Reality Engine is raised by Akeley in [1]. Subsequently a number of papers were presented in rapid succession which all approached this problem in similar manners [31, 43, 13]. Cullip and Neumann outline two approaches, described as object-space and image-space [31], Guan and Lipes examine the issues concerning hardware [43], and Cabral, Cam and Foran describe the use of texture mapping hardware to accelerate Radon transformations [13].
Figure 4.41: The Visible Human Female data set with a plane-relative ORT applied: (a) initial, (b) view-aligned, (c) secondary viewpoint. Here the ORT is scaled in depth from the front to the back of the data set, rather than from the front to the region of interest.

Figure 4.42: Relation of slice domain to volume data domain.

Wilson, Van Gelder and Wilhelms [120] examine the application of graphics library (OpenGL™) routines to automate the process of performing texture space transformations and setting clipping planes. They employ a bounding region of texture sampling planes of a size sufficient to accommodate the texture data volume in any orientation, as in figure 4.42. Wilson et al. then use the texture transformation matrix and the six hardware clipping planes of the Reality Engine to render only the parts of these planes that are within the volume for a given orientation. The use of the hardware clipping planes has the advantage of reducing the size of the planes that are rendered; it is the pixel-fill rate that greatly slows down the process of rendering in this situation.
Data is initially converted into a 3D texture map with a one-time application of a transfer function to determine the red, green and blue values as well as the opacity. Color and opacity are stored in a 3D texture map. The texture map is applied to many parallel polygonal planes, each plane sampling a slice through the texture. The texture coordinates are specified for the corners of the planes and the texture mapping hardware interpolates the texture coordinates across the plane in three dimensions.
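As an illustrative sketch of the one-time transfer-function pass (the lookup-table form and the ramp used here are assumptions, not the system's actual transfer function), the scalar volume is mapped through an RGBA table to build the 3D texture that the slicing planes later sample:

import numpy as np

def build_rgba_texture(volume, lut):
    # Apply the transfer function once: each scalar voxel value indexes a
    # (256, 4) table of red, green, blue and opacity.
    return lut[volume]          # shape (X, Y, Z, 4), ready for texture upload

# Example: a ramp that makes dense voxels brighter and more opaque.
lut = np.zeros((256, 4))
lut[:, :3] = np.linspace(0.0, 1.0, 256)[:, None]
lut[:, 3] = np.linspace(0.0, 0.8, 256)
texture = build_rgba_texture(np.random.randint(0, 256, (32, 32, 32)), lut)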
(a) Object Axis Aligned (b) Viewport Aligned Figure 4.43: Two basic approaches to the alignment of slices in 3D-Texture hardware accelerated volume rendering.
There are two means of relating the polygons to the 3D texture, the data volume. Either the texture planes may be aligned with the principal axes of the data and move (rotate/translate) with the data (Figure 4.43(a)), or the planes may remain aligned parallel to the projection plane and the texture coordinates alone moved (translated/rotated) in order to view the data from a different position (Figure 4.43(b)). In general more parallel planes sampling the 3D texture will result in a higher quality image and fewer planes yield higher rendering speeds.
The advantage of this method is that once all of the data has been downloaded into texture memory and the polygons transformed, the graphics hardware is capable of performing all of the slice rendering and the composition of the images.

The disadvantages are the restriction to rectilinear volumes and the relatively small texture sizes that can be accommodated in texture memory at one time. The process of bricking (breaking the volume up into smaller texture regions and then loading them into texture RAM and rendering them sequentially) permits the use of this technique to render larger volumes and also provides a method of optimizing the rendering process, by constructing bricking layouts that eliminate regions of the original data set that are empty; those regions are then not rendered and no time is lost in computing texture coordinates and compositing rendered pixels where no volume elements are present.
Silicon Graphics Inc. has also developed a library of supporting routines to facilitate the application of this technique with the OpenGL™ graphics library¹. The OpenGL™ Volumizer API extends the OpenGL set of primitives to include points, lines, triangles and now tetras (tetrahedrons). Five tetrahedrons form a minimal tessellation of a cube and tetrahedrons are able to tessellate any 3D shape. Thus in order to render any arbitrarily defined region of 3D volume data the volume rendering pipeline need only be able to render the tetrahedral primitive.
In order to provide a means of producing ORTs which result in the apparent cutting-into actions we saw in Section 4.3.1 we must provide a division of the texture sampling surfaces which provides additional vertices in the regions where displacements will occur. Our initial approach here was to find a method of fitting a mesh to a function that described the shape of the displacement function.
The method for anisotropic mesh generation presented in [11] provides a means of producing a tessellation of the plane that fits the geometry of the triangulation to a space defined by the Hessian of the function. The Hessian is the matrix of the second partial differentials of a function.

¹OpenGL™ version 1.2 will contain 3D texture coordinate generation as a core part of the API; in 1.1 it is available only on machines supporting the 3D textures as an extension.

Figure 4.44: 2D Gaussian function f(x, y) = e^(-10.0x² - 10.0y²).
Figure 4.45: Hessian of the 2D Gaussian function f(x, y) = e^(-10.0x² - 10.0y²).

For instance, the Hessian of the two-dimensional Gaussian function f(x, y) = e^(-10.0x² - 10.0y²), figure 4.44, is presented in figure 4.45. Using the methods presented in [12] we were able to generate meshes which provide a region of increased detail around a point of interest and conform to the shape of the Hessian of a Gaussian function centered at that point, as in figure 4.46.
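For reference, the Hessian of this Gaussian can be written out explicitly (this short derivation is added here for clarity and is not reproduced from the original text). With f(x, y) = e^{-10x^2 - 10y^2},

\frac{\partial f}{\partial x} = -20x\,f, \qquad \frac{\partial f}{\partial y} = -20y\,f,

H(x, y) =
\begin{pmatrix}
\frac{\partial^2 f}{\partial x^2} & \frac{\partial^2 f}{\partial x \, \partial y} \\
\frac{\partial^2 f}{\partial y \, \partial x} & \frac{\partial^2 f}{\partial y^2}
\end{pmatrix}
= e^{-10x^2 - 10y^2}
\begin{pmatrix}
400x^2 - 20 & 400xy \\
400xy & 400y^2 - 20
\end{pmatrix}.

Near the peak the Hessian is negative definite with large magnitude, which is consistent with the increased mesh detail around the point of interest seen in figure 4.46.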
Figure 4.46: Anisotropic mesh aligned to the Hessian of a Gaussian function.

Figure 4.47: Sampling planes aligned to a data space axis (a) or centered on the sight-line (b).

The method of rendering volumes using 3D texture mapping hardware will optimally see the texture sampling planes rotated to remain perpendicular to the view direction. We can use this fact to our advantage in the generation of ORTs with this method. If we create a single tessellated mesh which provides the desired geometrical detail for manipulation of the volume, rather than simply lining all of these planes up, centered on the view direction vector through the volume, we can instead position the planes so that they are centered on the sight-line through our point of interest. We see these two alternatives in figure 4.47.

Another interesting optimization is possible here that was not possible in the previous method of applying ORTs to fast-splatted volumetric data. If we determine the maximum deformation of one plane by the ORT, from this we can derive the state of all of the remaining texture sampling planes. The state of any plane can be derived by interpolating between the state of the initial plane and the maximally deformed plane to the appropriate state. In this manner we may produce functions that are constant, truncated or tapered with varying depth in the ORT coordinate system, all from the state of two texture sampling planes.
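A sketch of this optimization (array shapes and the depth parameterization are assumptions): every intermediate texture sampling plane is a per-vertex linear blend of the two stored extremes, with the blend weight chosen from the plane's depth in the ORT coordinate system to give a constant, truncated or tapered profile:

import numpy as np

def plane_state(undeformed, max_deformed, depth, profile="tapered"):
    """Derive one texture sampling plane from the two stored extreme states.

    undeformed   : (V, 3) vertices of the untouched plane.
    max_deformed : (V, 3) vertices of the maximally deformed plane.
    depth        : plane depth in the ORT coordinate system, 0 at the front
                   of the stack and 1 at the region of interest.
    profile      : how the deformation varies with depth.
    """
    if profile == "constant":
        w = 1.0
    elif profile == "truncated":
        w = 1.0 if depth < 0.5 else 0.0       # deform only the nearer half
    else:                                     # "tapered"
        w = 1.0 - depth                       # full at the front, none at the ROI
    return (1.0 - w) * undeformed + w * max_deformed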
Figure 4.48: Configuration of the tessellated plane and hidden texture surface used in demonstrating the stretch approach to ORT application.
Figure 4.49: Progressive application of deformation and resulting transparency effect. As triangles are stretched they are made progressively less opaque. The result is that in the area of the deformation the background layer becomes visible.

Figure 4.50: Detail view illustrating the transition of opacity values at the boundary of the deformation which results in the blurry appearance.

As with the fast-splatting method of rendering volumes, here too the application of a linear-source function is questionable. The first issue arises around introducing a symmetric hole into the tessellated surface and the subsequent rearrangement of texture coordinates to accommodate the hole as it grows. Again the result would be a tube, the inner boundary of which would simply be the series of voxels that a given ray intersected on its path through the volume. This would convey little meaningful information. One possible solution we have explored is what we term the colored balloon approach to introducing a hole into the tessellated surface. In this case triangles whose edges are stretched by the deformation have their contribution to the composition operation reduced by decreasing their alpha proportionally. Thus triangles that are stretched become increasingly transparent. To illustrate this mechanism we set up a tessellated plane in front of a background image as seen in figure 4.48. The result of decreasing the alpha of the stretched triangles in the mesh is illustrated in figure 4.49 with the edges in the mesh rendered and in figure 4.50 with the edges removed. The most significant problems with this method are that it results in an inner boundary to the distortion which is fuzzy and indistinct and that it results in the removal of some information from the display. If information removal were deemed acceptable then the trimming of the tessellation to produce a hole would result in clearer imaging of the interior boundary of the deformation and the result would be something like a CSG operation removing a sub-volume of data from the final image. This would still have the advantage that the removed regions would track the viewpoint as the representation is manipulated and the regions of interest are moved within the volume.
Rather than deal with a linear-source deformation we will concentrate now on the application of plane-relative def<.3rn~ations of the volume data set. As with fast-splatting these deformations produce the appearance of an incision into and retraction of ma,tcria,l in the representation. i~io data is rcmuvcd in this rirarirti:r it is nicrcly pushed aside in order to produce interior visual access aligned to the current viewpoint.
Figure 4.51: The initial configuration of the slice sampling mesh. Triangulation density is increased in the inside corner where ORT displacements will occur. This minimizes the extent of linear interpolation of texture coordinates.
Moving from a linear source to a plane source we can modify the way in which we arrange the texture sampling planes, replacing each single plane with four quarter-planes, these four planes covering the four quarters of the original plane. We will also change the tessellation pattern to provide increased geometrical detail in the inner corner of the quarter-plane, which will be adjacent to the line of sight through the region of interest in this revised scheme. We see this configuration in figure 4.51.
Figure 4.52: Introduction of a semi-circular deformation of the texture sampling mesh by deforming vertices along the y axis.

Figure 4.53: Mirroring the single deformed texture sample plane allows the creation of a closed empty region in the middle of the plane.

All of the benefits of reduced computation of deformations remain true for the use of quarter-planes. In fact computation of vertex deformations now need only be performed for the maximum deformation of one quarter-plane, the remaining three being rendered by reflection of the first quadrant, as we see in figure 4.53.
Texture sampling coordinates are also determined for only one plane and the remaining three planes obtain the correct texture coordinates by manipulation of the OpenGL™ texture coordinate transformation matrix. This use of geometrical transformations allows for the generation of an entire slice from a mesh covering only one quadrant.
As described in Section 3.3 we have the capability of modifying the profile of the ORT in depth and in the plane perpendicular to the view direction. This allows us to produce an ORT that is spatially constrained across the field of view and, using a hemispherical profile as a shaping envelope, produce ORTs that resemble incision and retraction operations on the volume data. We illustrate this in figures 4.52(a) through 4.52(d). The inner boundary of the ORT surface is formed by the intersection of the ORT source plane with the volume data. The use of quartered texture sampling planes means that there is a pre-defined break in the geometry at the location of this intersection and this obviates the complexity of dynamically re-triangulating the texture sampling planes.
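The reflection step can be sketched as follows (a minimal sketch; the quadrant layout and sign conventions are assumptions): the deformation is computed once for the quarter-plane whose inner corner lies on the sight-line, and the other three quarters are produced by mirroring about the two in-plane axes through that corner:

import numpy as np

def mirror_quadrants(quarter_vertices, center):
    """Build a full texture sampling plane from one deformed quarter-plane.

    quarter_vertices : (V, 3) deformed vertices of the first quadrant, in a
                       frame whose z axis is the sight-line through the
                       point of interest.
    center           : (3,) point where the sight-line pierces the plane.
    Returns a list of four (V, 3) vertex arrays, one per quadrant.
    """
    local = quarter_vertices - center
    quadrants = []
    for sx, sy in [(1, 1), (-1, 1), (-1, -1), (1, -1)]:
        q = local.copy()
        q[:, 0] *= sx            # mirror about the vertical in-plane axis
        q[:, 1] *= sy            # mirror about the horizontal in-plane axis
        quadrants.append(q + center)
    return quadrants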

Figure 4.54: OpenGL clipping planes are used to trim the texture planes to the boundaries of the volume presentation space.

Having developed a method for the construction of suitable geometrical sampling surfaces it is necessary to integrate these polygonal primitives with the 3D texture data in order to produce the volume rendered image. A point of interest may be located anywhere within the bounds of the volume data, and the volume data may be oriented arbitrarily with respect to the viewer. Our method of centering the texture sampling planes on the line of sight from the viewer through the point of interest means that the sampling planes must be scaled sufficiently large to encompass the data volume in the most extreme combination of point-of-interest position and orientation. This means a point of interest in one corner of the data and an orientation of the data, with rotations of 45 degrees in two axes to the viewer, such that a vertex of the data volume points towards the viewer. In this configuration the projection of the data volume is maximized in width, height and depth. The dimension of a single texture sampling plane must then be the maximum diagonal across the data set, and the stack of texture sampling planes must span that same distance in depth.
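The worst-case sizing can be written down directly (a small sketch; names are illustrative): both the side length of each sampling plane and the depth spanned by the stack are taken as the diagonal of the data volume:

import numpy as np

def sampling_plane_extent(dims, spacing=(1.0, 1.0, 1.0)):
    # Diagonal of the volume: used both for the side length of each texture
    # sampling plane and for the depth the stack of planes must span.
    size = np.asarray(dims) * np.asarray(spacing)
    return float(np.linalg.norm(size))

# Example: a 256 x 256 x 128 volume with unit voxels needs planes 384 units
# across, and a stack of planes spanning 384 units in depth.
extent = sampling_plane_extent((256, 256, 128))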

In general this means that a large portion of the area of each texture sampling plane falls outside of the boundaries of the data volume. Rather than wasting computational effort performing pixel-fill operations in these areas we apply clipping planes (rotated appropriately to account for the orientation of the viewer and data volume) to trim the stack of sampling planes to the bounds of the data volume in a manner similar to that employed in [120]. Figure 4.54 illustrates the clipping of the tessellated texture sampling planes and their rotation perpendicular to the viewer. In figure 4.54(d) we see the addition of the effect of an ORT function to the planes.
(a) (b) (c) (d) Figure 4.55: Progressive application of ORT to produce a horizontal, shaped, opening in a single plane in a volumetric representation.
(a) (b) (c) (d) Figure 4.56: Progressive application of ORT to produce a vertical, shaped, opening in a single plane in a volumetric representation.

(a) (b) (c) (d) Figure 4.57: Increasing the width of the shaping function to enlarge the horizontal ORT in a single slice of a volumetric data set.
(a) (b) (c) (d) Figure 4.58: The texture transformation matrix is manipulated so that as the intersection of the sampling planes is moved across the presentation space the texture space remains stationary.
The texture coordinates for each vertex in a given texture sampling plane are computed based on the position and orientation of the plane within the data volume. These coordinates are determined for the vertices in their original, un-deformed, configuration. These same coordinates are used in the application of texture to the deformed planes resulting from the application of an ORT operator. The result is that the data from the original position of the vertex is pulled to the deformed position. Rather than explicitly deforming each of the volume data elements as with fast splatting, we are able to achieve an interpolated result between the vertices of each element of the triangular mesh. Appropriate application of ORT operators and modification of the texture transformation matrices allows for the creation of horizontal (Figure 4.55) or vertical (Figure 4.56), bounded (Figure 4.57) or unbounded plane-relative incisions into the volume data. Furthermore the movement of the point of interest is accomplished by the movement of the textured plane and the counter-translation of the texture coordinates, maintaining the volume data position (Figure 4.58).
Figure 4.59: The Visible Human Male data set rendered via 3D-texture slicing.
We will illustrate the effect of ORT functions on the head of the Visible Human Male data set, shown in its initial state in figure 4.59. Examples of the application of a bounded, linearly-truncated, plane-relative ORT are shown in figures 4.60(a) and 4.60(b). In figures 4.61(a) through 4.61(c) the head is rotated in place while a horizontal ORT function provides visual access to an area behind and between the eyes.
Figure 4.60: The application of a horizontal ORT to the Visible Human Male data set. The point of interest is behind the left eye and the effect of the ORT is to reveal two cut-surfaces aligned to the viewpoint without the removal of data.

Figure 4.61: A more centrally located point of interest is specified in the Visible Human Male data set and the viewpoint is moved around the head from the front to the left side.

The next set of examples employs the UNC head data set; figures 4.62(a) through 4.62(c) illustrate the application of an ORT to the UNC head data set. In an oblique presentation (Figure 4.62(a)) both a vertically aligned and a horizontally aligned ORT are demonstrated. Arbitrary orientations between horizontal and vertical could be obtained by rotating the up vector used in the construction of the ORT coordinate system and rotating the texture sampling planes around the sightline to accommodate the new configuration.

Figure 4.62: The UNC head CT data set with vertically and horizontally aligned ORT functions applied to reveal cut surfaces aligned to the current viewpoint.
Of course the method we have described here applies only to a single region of interest in the volume representation and corresponding ORT. Having a single ORT
source means that we can arrange the texture sampling planes along that source by shearing their positions to center them on the sight-line through the point of interest.
To extend the system and to provide support for multiple regions of interest and ORTs we must abandon some of the efficiencies we have employed. Since the location of the intersection of multiple ORTs with each successive texture sampling plane would diverge as we moved away from the viewer, we cannot employ a single tessellation of these planes that provides additional geometrical detail in specifically the right places in all planes. Rather, a compromise solution of sufficient detail throughout the plane would be desirable. Each of these planes would have to be dynamically intersected by the ORT source planes and cut and re-triangulated at the line of intersection. While a great deal more computation is required at run time, such a system remains plausible as an area of future work. Interestingly, the fast-splatting method requires no such extension to account for multiple ORTs, since it is essentially a very dense example of the same methods that are applied to render discrete models.

4.3.3 Temporally Sequential 2D Information

Another source of 3D information is the change in a 2D information layout through time. A 3D layout of such information is possible when temporal sequence is employed as one of the spatial axes in a manner similar to that demonstrated in [3]. Figure 4.63 is an example of such 2D-over-time data arranged to form a 3D cube. We have employed this method in the Tardis system [21] for the display and exploration of spatio-temporal landscape data generated by the SELES (Spatially Explicit Landscape Event Simulator) engine [34].
Figure 4.63: Arrangement of spatio-temporal data as a 3-dimensional cube by using a spatial axis to represent time. Axes: Space(x), Space(y), and Space(z) or Time(t).
One of the metaphors employed in Tardis for interaction with such 2D-over-time information is that of a flip-book. The data is presented as a cube, where two axes represent space and the third time. By cracking the cube open perpendicular to one of these axes two interior faces are revealed, representing adjacent slices through the data. We see the result of such an operation in figure 4.64. If the cube is split perpendicular to the temporal axis then the faces display the state of two spatial dimensions across a step in time at the position of the split. If the cube is split across one of the spatial axes then the changes along a line across the landscape through time are revealed.
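A sketch of the flip-book split (the (time, y, x) storage order is an assumption): cracking the cube open at a given index along an axis exposes the two adjacent slices as the interior faces:

import numpy as np

def split_faces(cube, position, axis=0):
    """Return the two interior faces revealed by splitting a data cube.

    cube     : 3D array, e.g. shaped (time, y, x) for 2D-over-time data.
    position : index at which the cube is cracked open (0 < position < size).
    axis     : 0 splits across time; 1 or 2 split across a spatial axis,
               revealing a line across the landscape through time.
    """
    near_face = np.take(cube, position - 1, axis=axis)   # face of the near half
    far_face = np.take(cube, position, axis=axis)        # face of the far half
    return near_face, far_face

# Example: 100 time steps of a 64 x 64 landscape, opened at t = 40.
faces = split_faces(np.random.rand(100, 64, 64), 40)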
Figure 4.64: A block of spatio-temporal landscape data and an ORT operator applied to reveal the state of the landscape at an instant in time.

An operator derived from the ORT can be applied to the interaction of a user with this display metaphor in order to maintain the visibility of the open pages of the book. The application of an ORT means that each of the two faces will remain visible from the viewpoint during manipulation of the split position or navigation of the viewpoint, as we see in figure 4.65. Adjusting the position of the opening reveals a new point in time or space, while the viewpoint may be repositioned in order to obtain a clearer view of one face by orienting it perpendicular to the viewer.
Figure 4.65: Positioning a split in a data-cube (left), applying an ORT operator to reveal two internal faces (middle left), repositioning the viewpoint to obtain a more perpendicular view of the right face (middle right), and finally selecting a new point in time at which to position the split.

The ORT operator takes into account the position and orientation of the data cube, the opening in the cube, and the position of the viewpoint. We modify the ORT so that the splitting plane across the cube forms the source of the ORT operator, regardless of the relative position of the viewpoint. A line from the intersection of this plane with the far side of the data cube to the viewer becomes a tool for the determination of the degree to which to apply the ORT function. If the viewpoint lies on the plane splitting the cube, a relatively small degree of distortion reveals the two inner faces of the split. If the viewpoint lies away from the splitting plane then the degree of the ORT function is increased such that this sightline lies between the two open faces.
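One way to realize this rule (a sketch under geometric assumptions made here; the margin term and names are not from the original): measure the angle between the splitting plane and the sightline from the hinge, where the splitting plane meets the far side of the cube, to the viewer, and open each face at least that far so the sightline stays between the two pages:

import numpy as np

def opening_angle(viewpoint, hinge_point, split_normal, margin_deg=5.0):
    """Degree of the book-mode ORT needed to keep both faces visible.

    viewpoint    : eye position.
    hinge_point  : intersection of the splitting plane with the far side of
                   the data cube (the spine of the book).
    split_normal : unit normal of the splitting plane.
    Returns the angle in degrees by which each half is rotated away from
    the splitting plane.
    """
    sight = viewpoint - hinge_point
    sight = sight / np.linalg.norm(sight)
    # Angle between the sightline and the splitting plane itself.
    off_plane = np.degrees(np.arcsin(abs(np.dot(sight, split_normal))))
    # Open just past that angle so the sightline lies between the two faces;
    # a viewpoint on the splitting plane needs only the small margin.
    return off_plane + margin_deg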
Figure 4.66: Operation of the book mode ORT with the hardcover appearance.
Two modes of operation are possible in this book-like configuration of an ORT. We identify these modes by their similarity to the manner in which hard- and softcover books behave. In operating as a softcover book the two sections of the cube formed by the split are sheared away from the viewpoint and the near faces of these halves may become compressed; the far face of the cube remains planar. In operating as a hardcover book the two sections of the cube are rotated about the intersection of the splitting plane with the far side of the cube, as seen in figure 4.66. In this case the two sections are not sheared, their near faces do not compress and the far face is broken into two across the bend.

In each mode the relative sizes of the two sections produced by the split provide information about the relative position within the dataset in a manner that is familiar to us from our experiences with physical books. Animating the turning of pages as the position of the split is adjusted may also be employed to further support the book metaphor. Browsing through such a structure by moving the splitting plane supports tasks such as examining the structural changes in a landscape over time.
4.4 Discussion

This work presents a new framework with which to describe transformations on a data layout. The effect of these transformations on a layout is distinct from a change of the layout itself. Supporting the perception of these transformations as such will be an important aspect in their effective application.
As with 2D layout adjustment approaches, an understanding of the effect these operators have on a structure can be supported in a number of ways. If the structure is initially very regular (for example the 9x9x9 grid graph in section 4.1.1) then the effect of the ORT on the layout is readily apparent, even in a single still image. If the structure of the data is more random (for example one of the molecular models in section 4.1.2) then the effect of the adjustment performed by the ORT may not be so readily apparent. In these situations the addition of a secondary, more regular, structure to the presentation may aid in the perception of the distinct effect of the ORT. In section 4.1.2 we did not deflect the path of the bonds in the molecular models.
Bending these otherwise straight edges under the influence of the ORT also provides some additional clues as to the role of the layout adjustment operator on the original structure.
ORT operators support constrained layout adjustments which leave substantial parts of the original data layout intact. Further, properties such as color, scale and orientation of components remain invariant under the effect of an ORT. Other properties of groups of components, such as co-planarity, may not be preserved, although maintenance of orthogonal ordering is supported.

Comprehension of these distorted layouts may be supported by a number of different mechanisms and properties of the distortions themselves. As in 3DPS, these distortions support both the concepts of reversibility and revertability as described by Piaget [81]. Revertability is the understanding that two states are related and that one can effect manipulations to move from one to the other and back, while reversibility is the idea that two states are in some way equivalent. The ability to move between the original and distorted states of the layout is an important aspect in supporting understanding through these mechanisms. The fact that the adjustment of the layout is spatially constrained, and that as the viewpoint moves different regions enter and exit the area of this influence, further supports the perception of revertability.
This ability to move the viewpoint or re-orient the layout leads to the generation of motion fields through the movement of individual features of the structure.
The interaction of the ORT with the initial layout overlays a second set of motion vectors. These additional motion cues surround the area of interest but do not affect the actual object of interest at the source of the ORT. This isolation of the focal object in a secondary motion field may serve to further emphasize the location of the object of interest. An important area of future work will be to conduct studies of the fundamental aspects of perception and comprehension in interacting with these operators.

Chapter 5

Conclusion

The application of 3D computer graphics to information presentation is a field that continues to evolve and diverge rapidly. We have examined the field of detail-in-context displays for 2D information representations, and their extension to 3D information spaces. We saw that these techniques do not deal directly with the problem of occlusion of objects of interest which occurs in 3D representations. We have also seen that previous approaches to reducing occlusion in 3D do not produce detail-in-context results. We have presented a layout adjustment approach to creating 3D detail-in-context views, derived from 2D-oriented techniques, but accounting for the unique challenges of 3D.
Since the concepts in this work were first presented in [27] we have seen related results in the work of a number of other researchers, notably discontinuous ray deflectors [62] and page avoidance [88]. While differing in their underlying mechanisms, these techniques seek to produce similar results to those we have seen in the application of our own ORTs to volume data and 3D document spaces. What we have accomplished here is to construct a framework within which we can describe the operation of ORTs as well as related systems.

5.1 Contribution

The most significant concept we hope to have brought forward is the consideration of the sight-line of an object of interest in creating 3D detail-in-context views. The phenomenon of occlusion presents a challenge specific to 3D representations. In order for detail-in-context tools to be truly effective in dealing with 3D representations, occlusion of the objects of interest must be dealt with. We believe that our solution, the maintenance of clear sight-lines to the object of interest through operators which are inherently viewer-aligned in their description, provides a novel and elegant approach which extends readily to application across a wide range of application domains and representation styles.
5.2 Future Work

This work represents a beginning. There remain significant challenges and opportunities for the future. Some of the most significant challenges involve the creation of intuitive user interfaces for systems employing ORTs in visualizing and interacting with 3D representations. If this can be accomplished it will facilitate the study of the use of these operators in 3D interaction, and hopefully point towards the use of ORT-like mechanisms in many areas of 3D visualization.
Our earlier work in the creation of detail-in-context viewing tools for 2D data presented significant challenges in developing meaningful metaphors for direct interaction of users with such pliable surfaces. While moving a lens around an information space by clicking and dragging is intuitive, affordances for the specification and adjustment of other parameters of these lenses (degree of magnification, focal and contextual extent, lens shape adjustment) remain open problems.
Providing affordances for the specification and adjustment of ORT operators through direct manipulation is an equally challenging problem. Progress in this area will be necessary in order to move us to a point where we can begin an in-depth examination of the interactions of users with operators such as these. We take encouragement from the apparent success of related tools such as the page avoidance aspect of Data Mountain, and hope that experience will be repeated in testing of ORTs with even more complicated systems such as volumetric representations and 3D models.
The use of ORT operators in volume visualization applications, especially medical imaging, will require further study and development in conjunction with the domain users and experts. It remains to be seen if users such as radiologists will accept ORTs as an alternative to methods such as sequential slice presentation and traditional cutting plane operations.
The challenge of applying ORT operators to 3D parts assemblies remains an intriguing area for more development. The problems of collision detection and the inclusion of (dis)assembly sequence information in model representations all appear to be solvable. The end result of an interactive assembly diagram presents an attractive goal. A similar system for the interactive exploration of complex protein structures is equally intriguing.
5.3 Final Thought

Cyberspace, the abstract realm of information representation within the computer, is a space where abstractions and interactions with information are possible, including those that we could never experience in the "real world". Our exploration of the space of possibilities results in many methods that are readily comprehensible: familiar mappings of real-world operations. More abstract, creative, exploratory designs must continue to point towards techniques that are new and novel in order that we may discover the full potential of this medium.

Claims

CA002317336A 2000-09-06 2000-09-06 Occlusion resolution operators for three-dimensional detail-in-context Abandoned CA2317336A1 (en)

Priority Applications (8)

Application Number Priority Date Filing Date Title
CA002317336A CA2317336A1 (en) 2000-09-06 2000-09-06 Occlusion resolution operators for three-dimensional detail-in-context
PCT/CA2001/001256 WO2002021437A2 (en) 2000-09-06 2001-09-06 3d occlusion reducing transformation
AU2001289450A AU2001289450A1 (en) 2000-09-06 2001-09-06 Occlusion reducing transformations for three-dimensional detail-in-context viewing
EP01969103A EP1316071A2 (en) 2000-09-06 2001-09-06 3d occlusion reducing transformation
CA002421378A CA2421378A1 (en) 2000-09-06 2001-09-06 3d occlusion reducing transformation
JP2002525572A JP4774187B2 (en) 2000-09-06 2001-09-06 Occlusion reduction transform for observing details in a three-dimensional context
US09/946,806 US6798412B2 (en) 2000-09-06 2001-09-06 Occlusion reducing transformations for three-dimensional detail-in-context viewing
US10/884,978 US7280105B2 (en) 2000-09-06 2004-07-07 Occlusion reducing transformations for three-dimensional detail-in-context viewing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CA002317336A CA2317336A1 (en) 2000-09-06 2000-09-06 Occlusion resolution operators for three-dimensional detail-in-context

Publications (1)

Publication Number Publication Date
CA2317336A1 true CA2317336A1 (en) 2002-03-06

Family

ID=4167024

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002317336A Abandoned CA2317336A1 (en) 2000-09-06 2000-09-06 Occlusion resolution operators for three-dimensional detail-in-context

Country Status (6)

Country Link
US (2) US6798412B2 (en)
EP (1) EP1316071A2 (en)
JP (1) JP4774187B2 (en)
AU (1) AU2001289450A1 (en)
CA (1) CA2317336A1 (en)
WO (1) WO2002021437A2 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006089417A1 (en) * 2005-02-23 2006-08-31 Craig Summers Automatic scene modeling for the 3d camera and 3d video
CN109766069A (en) * 2019-01-15 2019-05-17 京东方科技集团股份有限公司 Auxiliary display method, device, electronic equipment and computer readable storage medium

Families Citing this family (133)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6895126B2 (en) 2000-10-06 2005-05-17 Enrico Di Bernardo System and method for creating, storing, and utilizing composite images of a geographic location
CA2328795A1 (en) 2000-12-19 2002-06-19 Advanced Numerical Methods Ltd. Applications and performance enhancements for detail-in-context viewing technology
US8416266B2 (en) 2001-05-03 2013-04-09 Noregin Assetts N.V., L.L.C. Interacting with detail-in-context presentations
CA2345803A1 (en) 2001-05-03 2002-11-03 Idelix Software Inc. User interface elements for pliable display technology implementations
US7213214B2 (en) 2001-06-12 2007-05-01 Idelix Software Inc. Graphical user interface with zoom for detail-in-context presentations
US7084886B2 (en) 2002-07-16 2006-08-01 Idelix Software Inc. Using detail-in-context lenses for accurate digital image cropping and measurement
US9760235B2 (en) 2001-06-12 2017-09-12 Callahan Cellular L.L.C. Lens-defined adjustment of displays
US7123263B2 (en) * 2001-08-14 2006-10-17 Pulse Entertainment, Inc. Automatic 3D modeling system and method
CA2361341A1 (en) 2001-11-07 2003-05-07 Idelix Software Inc. Use of detail-in-context presentation on stereoscopically paired images
AU2002343688A1 (en) 2001-11-15 2003-06-10 Nintendo Software Technology Corporation System and method of simulating and imaging realistic water surface
US7251622B2 (en) * 2002-01-16 2007-07-31 Hong Fu Jin Precision Ind. (Shenzhen) Co., Ltd. System and method for searching for information on inventory with virtual warehouses
CA2370752A1 (en) 2002-02-05 2003-08-05 Idelix Software Inc. Fast rendering of pyramid lens distorted raster images
US7296239B2 (en) * 2002-03-04 2007-11-13 Siemens Corporate Research, Inc. System GUI for identification and synchronized display of object-correspondence in CT volume image sets
AU2003226357A1 (en) * 2002-04-10 2003-10-27 Imagine Xd, Inc. System and method for visualizing data
US7250951B1 (en) * 2002-04-10 2007-07-31 Peter Hurley System and method for visualizing data
US8120624B2 (en) 2002-07-16 2012-02-21 Noregin Assets N.V. L.L.C. Detail-in-context lenses for digital image cropping, measurement and online maps
CA2393887A1 (en) 2002-07-17 2004-01-17 Idelix Software Inc. Enhancements to user interface for detail-in-context data presentation
CA2406047A1 (en) 2002-09-30 2004-03-30 Ali Solehdin A graphical user interface for digital media and network portals using detail-in-context lenses
CA2449888A1 (en) 2003-11-17 2005-05-17 Idelix Software Inc. Navigating large images using detail-in-context fisheye rendering techniques
CA2411898A1 (en) 2002-11-15 2004-05-15 Idelix Software Inc. A method and system for controlling access to detail-in-context presentations
US7698665B2 (en) * 2003-04-06 2010-04-13 Luminescent Technologies, Inc. Systems, masks, and methods for manufacturable masks using a functional representation of polygon pattern
US7480889B2 (en) * 2003-04-06 2009-01-20 Luminescent Technologies, Inc. Optimized photomasks for photolithography
US7124394B1 (en) * 2003-04-06 2006-10-17 Luminescent Technologies, Inc. Method for time-evolving rectilinear contours representing photo masks
US7037231B2 (en) * 2004-03-08 2006-05-02 Borgwarner, Inc. Variable biasing differential
US7486302B2 (en) 2004-04-14 2009-02-03 Noregin Assets N.V., L.L.C. Fisheye lens graphical user interfaces
US20050237336A1 (en) * 2004-04-23 2005-10-27 Jens Guhring Method and system for multi-object volumetric data visualization
US8106927B2 (en) 2004-05-28 2012-01-31 Noregin Assets N.V., L.L.C. Graphical user interfaces and occlusion prevention for fisheye lenses with line segment foci
US9317945B2 (en) 2004-06-23 2016-04-19 Callahan Cellular L.L.C. Detail-in-context lenses for navigation
US7601121B2 (en) * 2004-07-12 2009-10-13 Siemens Medical Solutions Usa, Inc. Volume rendering quality adaptations for ultrasound imaging
US20060022979A1 (en) * 2004-07-27 2006-02-02 Jonathan Sevy Method and apparatus for presenting information with varying levels of detail
US7714859B2 (en) 2004-09-03 2010-05-11 Shoemaker Garth B D Occlusion reduction and magnification for multidimensional data presentations
US7995078B2 (en) 2004-09-29 2011-08-09 Noregin Assets, N.V., L.L.C. Compound lenses for multi-source data presentation
WO2006099490A1 (en) * 2005-03-15 2006-09-21 The University Of North Carolina At Chapel Hill Methods, systems, and computer program products for processing three-dimensional image data to render an image from a viewpoint within or beyond an occluding region of the image data
US7580036B2 (en) 2005-04-13 2009-08-25 Catherine Montagnese Detail-in-context terrain displacement algorithm with optimizations
US20060271378A1 (en) * 2005-05-25 2006-11-30 Day Andrew P System and method for designing a medical care facility
US7532214B2 (en) * 2005-05-25 2009-05-12 Spectra Ab Automated medical image visualization using volume rendering with local histograms
JP2007066291A (en) * 2005-08-02 2007-03-15 Seiko Epson Corp Method, apparatus and system for image display, server, program, and recording medium
EP1925020A4 (en) * 2005-09-13 2014-01-01 Luminescent Technologies Inc Systems, masks, and methods for photolithography
US7921385B2 (en) * 2005-10-03 2011-04-05 Luminescent Technologies Inc. Mask-pattern determination using topology types
JP5061273B2 (en) * 2005-10-03 2012-10-31 新世代株式会社 Image generation device, texture mapping device, image processing device, and texture storage method
WO2007041602A2 (en) * 2005-10-03 2007-04-12 Luminescent Technologies, Inc. Lithography verification using guard bands
US7793253B2 (en) * 2005-10-04 2010-09-07 Luminescent Technologies, Inc. Mask-patterns including intentional breaks
US7840032B2 (en) * 2005-10-04 2010-11-23 Microsoft Corporation Street-side maps and paths
US7703049B2 (en) 2005-10-06 2010-04-20 Luminescent Technologies, Inc. System, masks, and methods for photomasks optimized with approximate and accurate merit functions
US8031206B2 (en) 2005-10-12 2011-10-04 Noregin Assets N.V., L.L.C. Method and system for generating pyramid fisheye lens detail-in-context presentations
JP4201207B2 (en) * 2005-11-21 2008-12-24 株式会社バンダイナムコゲームス Program, information storage medium, and image generation system
JP4566120B2 (en) * 2005-11-21 2010-10-20 株式会社バンダイナムコゲームス Program, information storage medium, and image generation system
US7509588B2 (en) 2005-12-30 2009-03-24 Apple Inc. Portable electronic device with interface reconfiguration mode
US7893940B2 (en) * 2006-03-31 2011-02-22 Calgary Scientific Inc. Super resolution contextual close-up visualization of volumetric data
US7983473B2 (en) 2006-04-11 2011-07-19 Noregin Assets, N.V., L.L.C. Transparency adjustment of a presentation
US20070247647A1 (en) * 2006-04-21 2007-10-25 Daniel Pettigrew 3D lut techniques for color correcting images
US8022964B2 (en) * 2006-04-21 2011-09-20 Apple Inc. 3D histogram and other user interface elements for color correcting images
US7693341B2 (en) * 2006-04-21 2010-04-06 Apple Inc. Workflows for color correcting images
US8041129B2 (en) * 2006-05-16 2011-10-18 Sectra Ab Image data set compression based on viewing parameters for storing medical image data from multidimensional data sets, related systems, methods and computer products
US7778445B2 (en) * 2006-06-07 2010-08-17 Honeywell International Inc. Method and system for the detection of removed objects in video images
US8560047B2 (en) 2006-06-16 2013-10-15 Board Of Regents Of The University Of Nebraska Method and apparatus for computer aided surgery
US8085990B2 (en) * 2006-07-28 2011-12-27 Microsoft Corporation Hybrid maps with embedded street-side images
GB2440562A (en) 2006-08-03 2008-02-06 Sony Uk Ltd Apparatus and method of data organisation
US7675517B2 (en) * 2006-08-03 2010-03-09 Siemens Medical Solutions Usa, Inc. Systems and methods of gradient assisted volume rendering
US20080043020A1 (en) * 2006-08-18 2008-02-21 Microsoft Corporation User interface for viewing street side imagery
US7957601B2 (en) * 2006-08-30 2011-06-07 Siemens Medical Solutions Usa, Inc. Systems and methods of inter-frame compression
US10313505B2 (en) 2006-09-06 2019-06-04 Apple Inc. Portable multifunction device, method, and graphical user interface for configuring and displaying widgets
KR101257849B1 (en) * 2006-09-29 2013-04-30 삼성전자주식회사 Method and Apparatus for rendering 3D graphic objects, and Method and Apparatus to minimize rendering objects for the same
US20080088621A1 (en) * 2006-10-11 2008-04-17 Jean-Jacques Grimaud Follower method for three dimensional images
US8483462B2 (en) * 2006-11-03 2013-07-09 Siemens Medical Solutions Usa, Inc. Object centric data reformation with application to rib visualization
US7830381B2 (en) * 2006-12-21 2010-11-09 Sectra Ab Systems for visualizing images using explicit quality prioritization of a feature(s) in multidimensional image data sets, related methods and computer products
US11315307B1 (en) 2006-12-28 2022-04-26 Tipping Point Medical Images, Llc Method and apparatus for performing rotating viewpoints using a head display unit
US11228753B1 (en) 2006-12-28 2022-01-18 Robert Edwin Douglas Method and apparatus for performing stereoscopic zooming on a head display unit
US10795457B2 (en) 2006-12-28 2020-10-06 D3D Technologies, Inc. Interactive 3D cursor
US11275242B1 (en) 2006-12-28 2022-03-15 Tipping Point Medical Images, Llc Method and apparatus for performing stereoscopic rotation of a volume on a head display unit
US8519964B2 (en) 2007-01-07 2013-08-27 Apple Inc. Portable multifunction device, method, and graphical user interface supporting user navigations of graphical objects on a touch screen display
US8233008B2 (en) * 2007-01-19 2012-07-31 Honeywell International Inc. Method and system for distinctively displaying selected floor with sufficient details in a three-dimensional building model
US8253736B2 (en) * 2007-01-29 2012-08-28 Microsoft Corporation Reducing occlusions in oblique views
US7724254B1 (en) * 2007-03-12 2010-05-25 Nvidia Corporation ISO-surface tesselation of a volumetric description
US7961187B2 (en) * 2007-03-20 2011-06-14 The University Of North Carolina Methods, systems, and computer readable media for flexible occlusion rendering
DE102008012307A1 (en) * 2007-03-27 2008-10-02 Denso Corp., Kariya Display device
WO2008147999A1 (en) * 2007-05-25 2008-12-04 Pixar Shear displacement depth of field
US8432396B2 (en) * 2007-06-08 2013-04-30 Apple Inc. Reflections in a multidimensional user interface environment
US8375312B2 (en) * 2007-06-08 2013-02-12 Apple Inc. Classifying digital media based on content
US9026938B2 (en) 2007-07-26 2015-05-05 Noregin Assets N.V., L.L.C. Dynamic detail-in-context user interface for application access and content access on electronic displays
US8619038B2 (en) 2007-09-04 2013-12-31 Apple Inc. Editing interface
DE602008006212D1 (en) * 2007-09-13 2011-05-26 Philips Intellectual Property Path approximate rendering
GB2460411B (en) * 2008-05-27 2012-08-08 Simpleware Ltd Image processing method
US8508550B1 (en) * 2008-06-10 2013-08-13 Pixar Selective rendering of objects
JP4636141B2 (en) * 2008-08-28 2011-02-23 Sony Corporation Information processing apparatus and method, and program
WO2010056132A1 (en) 2008-11-15 2010-05-20 Business Intelligence Solutions Safe B.V. Improved data visualization methods
US8423916B2 (en) * 2008-11-20 2013-04-16 Canon Kabushiki Kaisha Information processing apparatus, processing method thereof, and computer-readable storage medium
JP5470861B2 (en) * 2009-01-09 2014-04-16 Sony Corporation Display device and display method
FR2941319B1 (en) * 2009-01-20 2011-05-27 Alcatel Lucent Cartographic display method
DE102009042326A1 (en) * 2009-09-21 2011-06-01 Siemens Aktiengesellschaft Interactively changing the appearance of an object represented by volume rendering
US10007393B2 (en) * 2010-01-19 2018-06-26 Apple Inc. 3D view of file structure
US9232670B2 (en) 2010-02-02 2016-01-05 Apple Inc. Protection and assembly of outer glass surfaces of an electronic device housing
US8423911B2 (en) 2010-04-07 2013-04-16 Apple Inc. Device, method, and graphical user interface for managing folders
US10788976B2 (en) 2010-04-07 2020-09-29 Apple Inc. Device, method, and graphical user interface for managing folders with multiple pages
US9124488B2 (en) * 2010-04-21 2015-09-01 Vmware, Inc. Method and apparatus for visualizing the health of datacenter objects
US8725476B1 (en) * 2010-05-04 2014-05-13 Lucasfilm Entertainment Company Ltd. Applying details in a simulation
JP5627498B2 (en) * 2010-07-08 2014-11-19 Toshiba Corporation Stereo image generating apparatus and method
US8659598B1 (en) * 2010-10-28 2014-02-25 Lucasfilm Entertainment Company Ltd. Adjusting navigable areas of a virtual scene
US8928661B2 (en) 2011-02-23 2015-01-06 Adobe Systems Incorporated Representing a field over a triangular mesh
JP5628083B2 (en) * 2011-04-13 2014-11-19 Hitachi, Ltd. Computer system and assembly animation generation method
EP2515137A1 (en) * 2011-04-19 2012-10-24 UMC Utrecht Holding B.V. Processing a dataset representing a plurality of pathways in a three-dimensional space
US8970592B1 (en) 2011-04-19 2015-03-03 Lucasfilm Entertainment Company LLC Simulating an arbitrary number of particles
US9498231B2 (en) 2011-06-27 2016-11-22 Board Of Regents Of The University Of Nebraska On-board tool tracking system and methods of computer assisted surgery
US11911117B2 (en) 2011-06-27 2024-02-27 Board Of Regents Of The University Of Nebraska On-board tool tracking system and methods of computer assisted surgery
CA2840397A1 (en) 2011-06-27 2013-04-11 Board Of Regents Of The University Of Nebraska On-board tool tracking system and methods of computer assisted surgery
US8681181B2 (en) 2011-08-24 2014-03-25 Nokia Corporation Methods, apparatuses, and computer program products for compression of visual space for facilitating the display of content
US8913300B2 (en) * 2011-10-04 2014-12-16 Google Inc. Occlusion of vector image data
US8754885B1 (en) * 2012-03-15 2014-06-17 Google Inc. Street-level zooming with asymmetrical frustum
US20130257692A1 (en) 2012-04-02 2013-10-03 Atheer, Inc. Method and apparatus for ego-centric 3d human computer interface
RU2604674C2 (en) 2012-06-27 2016-12-10 Landmark Graphics Corporation Systems and methods for creating a three-dimensional texture atlas
US9098516B2 (en) * 2012-07-18 2015-08-04 DS Zodiac, Inc. Multi-dimensional file system
US10241638B2 (en) 2012-11-02 2019-03-26 Atheer, Inc. Method and apparatus for a three dimensional interface
US10105149B2 (en) 2013-03-15 2018-10-23 Board Of Regents Of The University Of Nebraska On-board tool tracking system and methods of computer assisted surgery
WO2014185920A1 (en) 2013-05-16 2014-11-20 Empire Technology Development, Llc Three dimensional user interface in augmented reality
GB2515510B (en) 2013-06-25 2019-12-25 Synopsys Inc Image processing method
US10663609B2 (en) 2013-09-30 2020-05-26 Saudi Arabian Oil Company Combining multiple geophysical attributes using extended quantization
US10318100B2 (en) * 2013-10-16 2019-06-11 Atheer, Inc. Method and apparatus for addressing obstruction in an interface
EP3063608B1 (en) 2013-10-30 2020-02-12 Apple Inc. Displaying relevant user interface objects
KR102315351B1 (en) * 2014-10-07 2021-10-20 Samsung Medison Co., Ltd. Imaging apparatus and controlling method of the same
US9488481B2 (en) 2014-10-14 2016-11-08 General Electric Company Map presentation for multi-floor buildings
EP3018633A1 (en) * 2014-11-04 2016-05-11 Siemens Aktiengesellschaft Method for visually highlighting spatial structures
US20160267714A1 (en) * 2015-03-12 2016-09-15 LAFORGE Optical, Inc. Apparatus and Method for Multi-Layered Graphical User Interface for Use in Mediated Reality
US10347052B2 (en) * 2015-11-18 2019-07-09 Adobe Inc. Color-based geometric feature enhancement for 3D models
US10466854B2 (en) * 2016-06-10 2019-11-05 Hexagon Technology Center Gmbh Systems and methods for accessing visually obscured elements of a three-dimensional model
DK201670595A1 (en) 2016-06-11 2018-01-22 Apple Inc Configuring context-specific user interfaces
US11816325B2 (en) 2016-06-12 2023-11-14 Apple Inc. Application shortcuts for carplay
US10762715B2 (en) * 2016-10-21 2020-09-01 Sony Interactive Entertainment Inc. Information processing apparatus
EP3389265A1 (en) * 2017-04-13 2018-10-17 Ultra-D Coöperatief U.A. Efficient implementation of joint bilateral filter
EP3404620A1 (en) * 2017-05-15 2018-11-21 Ecole Nationale de l'Aviation Civile Selective display in an environment defined by a data set
US11675476B2 (en) 2019-05-05 2023-06-13 Apple Inc. User interfaces for widgets
US10891766B1 (en) * 2019-09-04 2021-01-12 Google Llc Artistic representation of digital data
US11210865B2 (en) 2019-10-03 2021-12-28 International Business Machines Corporation Visually interacting with three dimensional data in augmented or virtual reality
US20230118522A1 (en) * 2021-10-20 2023-04-20 Nvidia Corporation Maintaining neighboring contextual awareness with zoom

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5751289A (en) * 1992-10-01 1998-05-12 University Corporation For Atmospheric Research Virtual reality imaging system with image replay
US5432895A (en) * 1992-10-01 1995-07-11 University Corporation For Atmospheric Research Virtual reality imaging system
US5689628A (en) * 1994-04-14 1997-11-18 Xerox Corporation Coupling a display object to a viewpoint in a navigable workspace
AU2424295A (en) * 1994-04-21 1995-11-16 Sandia Corporation Multi-dimensional user oriented synthetic environment
JP3798469B2 (en) * 1996-04-26 2006-07-19 Pioneer Corporation Navigation device
US6204850B1 (en) * 1997-05-30 2001-03-20 Daniel R. Green Scaleable camera model for the navigation and display of information structures using nested, bounded 3D coordinate spaces
US6160553A (en) 1998-09-14 2000-12-12 Microsoft Corporation Methods, apparatus and data structures for providing a user interface, which exploits spatial memory in three-dimensions, to objects and in which object occlusion is avoided
US6842175B1 (en) * 1999-04-22 2005-01-11 Fraunhofer Usa, Inc. Tools for interacting with virtual environments
US6741730B2 (en) * 2001-08-10 2004-05-25 Visiongate, Inc. Method and apparatus for three-dimensional imaging in the fourier domain

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006089417A1 (en) * 2005-02-23 2006-08-31 Craig Summers Automatic scene modeling for the 3d camera and 3d video
CN109766069A (en) * 2019-01-15 2019-05-17 京东方科技集团股份有限公司 Auxiliary display method, device, electronic equipment and computer readable storage medium
CN109766069B (en) * 2019-01-15 2023-05-12 高创(苏州)电子有限公司 Auxiliary display method, device, electronic equipment and computer readable storage medium

Also Published As

Publication number Publication date
JP2004507854A (en) 2004-03-11
US6798412B2 (en) 2004-09-28
US20020122038A1 (en) 2002-09-05
WO2002021437A2 (en) 2002-03-14
JP4774187B2 (en) 2011-09-14
WO2002021437A3 (en) 2002-05-30
EP1316071A2 (en) 2003-06-04
US20040257375A1 (en) 2004-12-23
US7280105B2 (en) 2007-10-09
AU2001289450A1 (en) 2002-03-22

Similar Documents

Publication Publication Date Title
CA2317336A1 (en) Occlusion resolution operators for three-dimensional detail-in-context
Kwon et al. A study of layout, rendering, and interaction methods for immersive graph visualization
Carpendale et al. 3-dimensional pliable surfaces: For the effective presentation of visual information
Wang et al. The magic volume lens: An interactive focus+context technique for volume rendering
Bowman et al. Information-rich virtual environments: theory, tools, and research agenda
Carpendale et al. Extending distortion viewing from 2D to 3D
Wann et al. What does virtual reality NEED?: human factors issues in the design of three-dimensional computer environments
Martinez et al. Molecular graphics: bridging structural biologists and computer scientists
Dorsey et al. The mental canvas: A tool for conceptual architectural design and analysis
Sonnet et al. Integrating expanding annotations with a 3D explosion probe
Kraak Computer-assisted cartographical 3D imaging techniques
Chen et al. Manipulation, display, and analysis of three-dimensional biological images
Cowperthwaite Occlusion resolution operators for three-dimensional detail-in-context.
Brosz et al. Single camera flexible projection
Penaranda et al. Real-time correction of panoramic images using hyperbolic Möbius transformations
Jarrett et al. Exploring and interrogating astrophysical data in virtual reality
Ratti et al. PHOXEL-SPACE: an interface for exploring volumetric data with physical voxels
Nowinski et al. A new presentation and exploration of human cerebral vasculature correlated with surface and sectional neuroanatomy
Nakao et al. Adaptive proxy geometry for direct volume manipulation
Kolesár et al. A fractional Cartesian composition model for semi-spatial comparative visualization design
Beckhaus Dynamic potential fields for guided exploration in virtual environments
Waltemate et al. Membrane Mapping: Combining Mesoscopic and Molecular Cell Visualization.
Bruckner et al. Illustrative focus+context approaches in interactive volume visualization
Li et al. Virtual retractor: An interactive data exploration system using physically based deformation
Jentner et al. DeepClouds: Stereoscopic 3D Wordle based on Conical Spirals

Legal Events

Date Code Title Description
FZDE Discontinued

Effective date: 20030828