WO2002005217A1 - A virtual surgery system with force feedback - Google Patents

A virtual surgery system with force feedback

Info

Publication number
WO2002005217A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
objects
model
interface device
simulated
Prior art date
Application number
PCT/SG2000/000101
Other languages
French (fr)
Inventor
Chee-Kong Chui
Hua Wei
Yaoping Wang
Wieslaw L. Nowinski
Original Assignee
Kent Ridge Digital Labs
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kent Ridge Digital Labs filed Critical Kent Ridge Digital Labs
Priority to PCT/SG2000/000101 priority Critical patent/WO2002005217A1/en
Priority to US10/332,429 priority patent/US7236618B1/en
Publication of WO2002005217A1 publication Critical patent/WO2002005217A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/016Input arrangements with force or tactile feedback as computer generated output to the user
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00Indexing scheme for image generation or computer graphics
    • G06T2210/28Force feedback

Definitions

  • the invention relates to a novel means and method of extracting and editing three-dimensional objects from volume images utilising multimedia interfaces.
  • the means and method is particularly suited to the extraction of objects from volume images produced by Magnetic Resonance Imaging (MRI), Computed Tomography (CT) or ultrasound equipment.
  • MRI Magnetic Resonance Imaging
  • CT Computed Tomography
  • ultrasound equipment
  • volume data consisting of a stack of two-dimensional images
  • the extraction of three-dimensional images from volume data consisting of a stack of two-dimensional images is attracting significant attention in light of the general recognition of the likely impact that computer integrated surgical systems and technology will have in the future.
  • Computer assisted surgical planning and computer assisted surgical execution systems accurately perform optimised patient specific treatment plans.
  • the input data to these types of systems are volume data usually obtained from tomographic imaging techniques.
  • the objects and structures embedded in the volume data represent physical structures. In many instances, the medical practitioner is required to feel the patient in the diagnosis process.
  • the proposal includes a recursive search method of the cross section of tree structure objects.
  • the reconstruction of a three-dimensional volume of the vessel structure has been demonstrated in less than 10 minutes after the acquisition of a rotational image.
  • the volume rendered three-dimensional image offers high quality views compared with results of other three-dimensional imaging modalities when applied to high contrast vessel data.
  • VHD Visible Human male Data
  • HCI Human Computer Interaction
  • the present invention provides a method of interacting with simulated three-dimensional objects, the method including the steps of the identification of three-dimensional objects from volume images; the association of physical properties with identified objects said properties including at least visual and haptic properties of the identified objects; and incorporating said identified objects and associated physical properties into a system including at least one visual interface device and at least one haptic interface device thus enabling the system to simulate visual and haptic properties of the identified objects, or any part thereof, to a human user interacting with the simulated three-dimensional objects, or any part thereof, said interaction including the generation of signals by the system for transmission to the at least one visual interface device in accordance with the physical properties associated with the objects and the reception of signals from the least one haptic interface device in accordance with user requests.
  • ancillary visual information is presented to the user.
  • the method includes the association of audible properties with identified objects, or parts thereof, and the system includes at least one audio interface device.
  • the method may also provide ancillary audio information to the user during the user's interaction with simulated three-dimensional objects.
  • haptic interface devices include the facility to receive signals from the system representing associated haptic properties of simulated objects.
  • the interaction between the various interface devices of the system is co-ordinated such that the correct registration of physical properties with the identified objects is maintained for the various interfaces during interaction between the simulated objects and the human user.
  • the method may also include the step of editing the representation of a simulated object in order to refine the representation that was originally derived from volume images.
  • the step of editing the representation of a simulated object includes the use of the at least one haptic and at least one visual interface device by the user.
  • the step of editing the representation of a simulated object may also include the use of an audio interface device by the user during the editing process.
  • the method includes the generation of a discrete model of identified objects derived from volume images and the step of editing the representation of a simulated object includes the application of an interpolative scheme to produce a continuous model from the discrete model derived from volume images.
  • the method also includes an iterative process whereby an edited version of an object is compared with a previous discrete model of the object to determine the difference between the edited version and the previous discrete model of the object and whether the difference is within an acceptable limit, the edited version being converted into a discrete model for the purpose of determining the difference with the previous discrete model.
  • the present invention also provides a system for interacting with simulated three-dimensional objects, the system including representations of three-dimensional objects identified from volume images, the representations including physical properties associated with each of the objects relating to at least visual and haptic properties of the objects, and at least one visual interface device and at least one haptic interface device enabling a user to interact with a simulated three-dimensional object, the interaction including the generation of signals by the system for transmission to the at least one visual interface device in accordance with the visual properties associated with the objects and the reception of signals from the at least one haptic interface device in accordance with user requests.
  • the system includes the provision of ancillary visual information to a user. It is also preferred that the system associates audible properties with identified objects and also includes an audio interface device and the generation of signals by the system for transmission to the audio interface device in accordance with the audio properties associated with the objects. It is also preferred that the system include the provision of ancillary audio information to a user.
  • the haptic interface device is capable of receiving signals from the system corresponding to the associated haptic properties for simulated objects, the signals preferably conveying a haptic sensation to the user as a result of interacting with the objects.
  • the system may also include a voice recognition facility capable of receiving and interpreting spoken requests from a user, thus enabling the user to issue commands to the system without necessitating the use of one or both of the user's hands. This is particularly advantageous in simulating medical procedures, wherein a surgeon usually uses both hands and issues spoken requests to assistants.
  • the system preferably includes the capability to co-ordinate the interaction between the various interface devices such that the correct registration of physical properties with the identified objects is maintained for the various system interfaces during interaction between the simulated objects and a human user.
  • the present invention provides a method of interacting with simulated three-dimensional objects, the method including the steps of the identification of three-dimensional objects from volume images; the development of a model to represent those objects; the association of physical properties with identified objects in the developed model, said properties including at least visual and haptic properties of the identified objects; and incorporating said model and associated physical properties into a system including at least one visual interface device and at least one haptic interface device, thus enabling the system to simulate visual and haptic properties of the identified objects, or any part thereof, to a human user interacting with the simulated three-dimensional objects.
  • the method includes the step of editing the model of a simulated object in order to refine the model of the simulated object derived from volume images.
  • the method includes the use of the haptic and visual interface devices by the user.
  • the method may also make use of an audio interface device by a user.
  • the method preferably includes the derivation of a discrete model from volume images, and the step of editing the model of a simulated object includes the application of an interpolative scheme to produce a continuous model from the discrete model derived from volume images.
  • the method includes an iterative process whereby an edited version of a model is compared with a previous discrete model of the object to determine the difference between the edited version and the previous discrete model of the object and whether the difference is within an acceptable limit, the edited version being converted into a discrete model for the purpose of determining the difference with the previous discrete model.
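The iterative edit-and-compare process described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: `edit_fn` and `discretise_fn` are hypothetical stand-ins for the user-driven editing step and the conversion of the edited (continuous) version back to a discrete model.

```python
import numpy as np

def refine_model(discrete_model, edit_fn, discretise_fn, tolerance, max_iterations=10):
    """Iteratively edit a discrete model until successive versions converge.

    edit_fn: applies an edit, producing a (continuous) edited version.
    discretise_fn: converts the edited version back into a discrete model
                   so it can be compared against the previous discrete model.
    tolerance: the acceptable limit on the difference between versions.
    """
    previous = discrete_model
    for _ in range(max_iterations):
        edited = discretise_fn(edit_fn(previous))
        # Compare the new discrete version with the previous one.
        difference = np.abs(edited - previous).max()
        if difference <= tolerance:
            return edited  # difference is within the acceptable limit
        previous = edited
    return previous
```

A toy usage: with `edit_fn = lambda m: m * 0.5` and an identity `discretise_fn`, the difference halves each pass until it falls under the tolerance.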
  • the model generated as part of the method is a potential field model representation of identified objects.
  • the model may also include a physically oriented data structure for the representation of objects derived from volume images and the association of physical properties with that model.
  • the present invention provides a system for interacting with simulated three-dimensional objects including a model of three-dimensional objects identified from volume images, said model including physical properties associated with each of the objects relating to at least visual and haptic properties thereof, and at least one visual and haptic interface device enabling a user to interact with a simulated three-dimensional object said interaction including the generation of signals by the system for transmission to at least the at least one visual interface device in accordance with the visual properties associated with the objects and the reception of signals from the at least one haptic interface device in accordance with user requests.
  • the system includes audible properties associated with the identified objects and also includes an audio interface device and the generation of signals for transmission to the audio interface device in accordance with the audio properties associated with objects.
  • the system includes a haptic interface device capable of receiving signals from the system corresponding to the associated haptic properties for a simulated object, or part thereof, the signals preferably conveying a haptic sensation to the user as a result of interacting with the object.
  • the system preferably includes a voice recognition facility capable of receiving and interpreting spoken requests of a user, thus enabling the user to issue commands to the system without necessitating the use of one or both of the user's hands.
  • the model incorporated in the system is a potential field model representation of identified objects.
  • the model may also include a physically oriented data structure for the representation of objects derived from volume images and the association of physical properties with that model. Description of the Preferred Embodiment
  • FIG. 1 illustrates a highly abstracted architectural representation of the IOEES
  • FIG. 2 shows a representative implementation of an IOEES
  • FIG. 3 represents an overview of the basic components involved during the implementation of the method of IOEES
  • Figures 4a and 4b show diagrammatic representations of a suggested implementation of an IOEES where a user is seated at a workstation with various interface devices available to them for interacting with simulated three-dimensional objects;
  • Figure 5 illustrates a stack of two-dimensional images as obtained from a scan of a patient
  • Figures 6a and 6b illustrate a simple transformation function used in the development of a potential field model
  • Figure 7 illustrates the creation process of a potential field
  • Figures 8a, 8b, 8c, 8d and 8e show various shapes and configurations that may be used for probes
  • Figure 9 illustrates a suggested implementation of an object manipulator for use in an IOEES
  • Figures 10a, 10b and 10c illustrate a suggested implementation of a force feedback glove for use in an IOEES
  • Figure 11 illustrates a possible scenario involving the use of two force feedback gloves, a probe and a spherical object
  • Figure 12 illustrates a highly abstracted architectural representation of the Interaction Manager of the IOEES
  • Figure 13 shows a hierarchical representation of the functions involved in the transmission and reception of commands in the IOEES
  • Figure 14 shows the preferred process for the calculation of resistance based upon a potential field model for a force feedback controller
  • Figures 15a and 15b illustrate the operation of the Explorer attempting to develop a path along a list of points
  • Figure 16 illustrates the primary functions involved in the refinement of a continuous model of three-dimensional objects obtained from volume images and a first discrete model thereof
  • Figure 17 illustrates the steps involved in an example operation of the IOEES in extracting and editing a three-dimensional object from volume data.
  • Figure 1 illustrates an architectural representation of the IOEES depicting visual, audio and haptic interfaces as part of a multimedia workstation that is serving the user. The registration and coordination of these interfaces is controlled by the Interaction Manager.
  • the Interaction Manager interacts with the Editor and Extractor modules to provide a continuous flow of haptic, visual and audio feedback to the user during the object extraction and editing processes.
  • the main source of data used in this system is sets of volume images. However, 2D and 3D atlas or other geometric or physical data could be used to enhance the extraction and editing processes.
  • Figure 2 shows an implementation of IOEES where the user wears a headset and holds a pair of haptic tools, in this instance a force feedback probe and an object manipulator.
  • the computer system displays a rendered volume in plain or stereo 3D.
  • FIG. 3 is an overview of the method used in this IOEES.
  • the system polls data inputs from both haptic and audio means. These inputs will generally represent the volume object being edited or a selected region of an object being extracted.
  • the inputs may initiate responses from the computer to the user through the visual, haptic or audio means.
  • FIGS. 4a and 4b show diagrammatic representations of a suggested implementation of an IOEES workstation.
  • the workstation in an IOEES includes a computer system that has sufficient graphic capabilities to render a set of volumetric images in plain or stereo 3D.
  • the workstation in Figure 4a also includes a 3D-interaction tool with force feedback, referred to as a force feedback probe, that enables a user to pick and move a point or an object in the 3D space of the set of volumetric images displayed on the computer monitor.
  • Figure 4a also includes a hand held device, referred to as an object manipulator that enables a user to perform zooming, 3D rotation and translation of the volumetric object in the virtual space.
  • the workstation also includes a speaker and a microphone, possibly in the form of headset to provide audio interaction between user and the computer.
  • An alternative workstation configuration is depicted in Figure 4b, where the workstation replaces the force feedback probe of Figure 4a with a pair of instrumented gloves.
  • Figure 4b also depicts a workstation that includes a holoscope in place of the stereo emitter and glasses, and the headset with a more conventional microphone and speaker arrangement.
  • Volume images are generally provided in the form of a stack of two-dimensional images in the axial direction of an object that is to be considered.
  • a diagrammatic representation is shown in Figure 5, where a stack of images is generated correlating with two-dimensional slices taken generally along the longitudinal axis of a patient
  • Volumetric images serve as the source of input to the method of the present invention.
  • Virtually any scanner can produce suitable axial images, or can at least produce images that can easily be converted to axial images.
  • rotational CT scanners capture patient data in the form of projection images. From these images, a method known as the back projection technique can be used to construct volume images.
  • volume rendering techniques such as ray casting and projection techniques have traditionally been used in the visualization of volume images. Advances in computer hardware and software have greatly improved the speed and accuracy of such visualization.
  • a potential field model is used as the data source for force computation and audio generation.
  • the potential field is defined as a volume of force vectors.
  • the input to the process for creating a potential field model could be the output of two-dimensional images from a CTA scanner.
  • the data will contain both vascular and bone structures and to differentiate between these two types of structure, the process includes an analysis of the intensity of the pixels forming the two-dimensional images. This enables the derivation of outlines in the two-dimensional images corresponding to the vascular and bone structures.
  • the different structures are determined by an analysis of the intensity of each voxel forming the volume image.
  • for each plane of the volume, the method preferably first derives force vectors for that plane, the force vectors on each plane being independent of the other planes in the volume. A collection of planes of force vectors contributes to the establishment of a potential field.
  • the construction of a potential field from volume data is illustrated in Figures 6a and 6b.
  • a relatively straightforward transformation function may be used to associate a black pixel with a value of 1, a white pixel that is not bordering any black pixel with a value of 0, a white pixel that is bordering a black pixel with a vector of magnitude 0.5 and direction given in Figure 6b, and a value of 0.5 in an ambiguous case.
  • the force vectors resulting from this step of the method are referred to as a potential field, since the force vectors define the tendency of moving from values of low to high potential, similar to an electrostatic potential field.
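The magnitude part of the transformation described above can be sketched in a few lines. This is a simplified illustration assuming a binary image in which black pixels mark the object; it considers only 4-neighbours and omits the directional component given in Figure 6b and the ambiguous case:

```python
import numpy as np

def plane_potential(binary_image):
    """Assign a potential magnitude to each pixel of one image plane.

    Object (black) pixels get 1.0, background pixels bordering an object
    pixel get 0.5, and remaining background pixels get 0.0.
    """
    img = np.asarray(binary_image, dtype=bool)  # True = black/object pixel
    padded = np.pad(img, 1, constant_values=False)
    # A background pixel borders the object if any 4-neighbour is an object pixel.
    borders = (padded[:-2, 1:-1] | padded[2:, 1:-1] |
               padded[1:-1, :-2] | padded[1:-1, 2:])
    return np.where(img, 1.0, np.where(borders, 0.5, 0.0))
```

Applied to a 3×3 image with a single central black pixel, the centre maps to 1.0, its 4-neighbours to 0.5, and the corners to 0.0.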
  • FIG. 7 provides a more detailed illustration of a preferred process of creating a potential field model.
  • the input to the creation process is multi-modality volume data, which refers to data that consists of one or more groups of volume data with different imaging modalities. This is likely as a patient may be subjected to MRA scanning in addition to CT scanning.
  • a voxel is the smallest unit cube of a volume and an inter-voxel similarity comparison is used to determine whether a voxel belongs to a particular object.
  • Local edge detection determines the boundary area of the regions and the classification process labels the object regions.
  • for each structure or object embedded in the volume data, the regions containing the object are labeled based upon information including the image intensity, the intensity gradient between pixels and information from the material properties library.
  • the material properties library includes information regarding the mapping function of physical properties based upon voxel intensities.
  • each structure or object embedded in the volume data is labeled as being of a particular type such as bone or soft tissue.
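The labelling step above might be approximated by intensity-range classification, as a stand-in for the fuller inter-voxel similarity, edge detection and classification pipeline the patent describes. The label names and intensity ranges below are illustrative assumptions, not values from the patent:

```python
import numpy as np

def label_structures(volume, thresholds):
    """Label each voxel by intensity range.

    thresholds: maps a label name (e.g. "bone", "soft_tissue") to an
    (inclusive low, exclusive high) intensity range. Later entries
    overwrite earlier ones where ranges overlap.
    """
    labels = np.zeros(volume.shape, dtype=np.uint8)  # 0 = background
    for code, (name, (low, high)) in enumerate(thresholds.items(), start=1):
        labels[(volume >= low) & (volume < high)] = code
    return labels
```

For instance, with ranges for soft tissue and bone, a voxel of intensity 300 would be labelled as bone (code 2) and a voxel of intensity 100 as soft tissue (code 1).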
  • Another process, referred to as "Model Guidance", is optional and makes use of a reference model or atlas, and can be used to guide the three-dimensional region of interest and hence more accurately label object regions. After an object has been identified and labeled, the physical properties of the regions are assigned.
  • the object-shortest-distance is defined as the shortest distance between a voxel and the regions defining the object. It is determined for each voxel in the volume and the potential value is then determined based upon a polynomial combination of intensity, physical properties and object-shortest-distance.
  • the relationship determining the potential value for each voxel is as follows:
  • Function f(e) represents the mechanical properties of tissue point P.
  • the structure is nonlinear elastic or visco-elastic.
  • σ and ε are stress and strain vectors
  • D is a constitutive matrix.
  • the matrix D is constant for a given position.
  • the equation is occasionally referred to as Hooke's generalized law. In a homogeneous case, only two elastic parameters, Young's modulus E and Poisson's ratio ν, are present.
  • the mass density is also a component part in this function.
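For the homogeneous isotropic case just described, the constitutive matrix D of Hooke's generalised law (σ = Dε) can be assembled from E and ν alone. This is the standard linear-elasticity form in Voigt notation, shown for illustration rather than taken from the patent:

```python
import numpy as np

def isotropic_constitutive_matrix(E, nu):
    """6x6 constitutive matrix D for a homogeneous isotropic material.

    E: Young's modulus; nu: Poisson's ratio.
    Rows/columns follow Voigt order: xx, yy, zz, yz, xz, xy.
    """
    c = E / ((1.0 + nu) * (1.0 - 2.0 * nu))
    D = np.zeros((6, 6))
    D[:3, :3] = c * nu                                # coupling between normal strains
    np.fill_diagonal(D[:3, :3], c * (1.0 - nu))       # normal stress terms
    D[3:, 3:] = np.eye(3) * c * (1.0 - 2.0 * nu) / 2  # shear terms
    return D
```

Since the matrix depends only on E and ν, it is constant for a given (homogeneous) tissue region, matching the statement that D is constant for a given position.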
  • Function g represents the image intensity or the gray value of Point P. It is also related to the scanning mechanism, the scanning technique and other associated geometric information. Function g is used for image processing but in some cases, its value may also relate to mechanical properties, in particular, the mass density. In its simplest form, Function g equates to the image intensity.
  • the proposed potential field can be quantised with respect to a grid space.
  • the potential components are pre-calculated and stored in a defined data structure.
  • the computation of a given point P(x,y,z) will be implemented using a linear-cubic interpolation such as:
  • N_i (i = 1, 2, ..., 8) is the i-th shape function, which can be expressed as: N_i = (1/8)(1 + ξξ_i)(1 + ηη_i)(1 + ζζ_i)
  • ξ, η and ζ are coordinates in a parametric space.
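The interpolation over the eight corners of a grid cell can be sketched with these shape functions. A minimal illustration, assuming parametric coordinates in [-1, 1] and standard trilinear (8-node) shape functions; corner ordering is a convention chosen here, not specified by the patent:

```python
import numpy as np

# Parametric corner signs (xi_i, eta_i, zeta_i) of the eight cell corners.
CORNERS = np.array([[sx, sy, sz] for sx in (-1, 1)
                                 for sy in (-1, 1)
                                 for sz in (-1, 1)], dtype=float)

def shape_functions(xi, eta, zeta):
    """N_i = (1/8)(1 + xi*xi_i)(1 + eta*eta_i)(1 + zeta*zeta_i) for i = 1..8."""
    p = np.array([xi, eta, zeta])
    return np.prod(1.0 + CORNERS * p, axis=1) / 8.0

def interpolate(corner_values, xi, eta, zeta):
    """Interpolate pre-calculated potential components stored at cell corners."""
    return float(shape_functions(xi, eta, zeta) @ np.asarray(corner_values, dtype=float))
```

At the cell centre (ξ = η = ζ = 0) every N_i is 1/8, so the interpolated value is the mean of the eight corner values; at a corner, the value reduces to that corner's stored value.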
  • the potential field model provides a basis for the establishment of force relationships such that realistic forces providing resistance to the passage of the tool can be fed back to the user by way of a haptic interface.
  • the force vectors which are stored as part of the potential field model, describe the direction and magnitude of force that should be conveyed as force feedback to a user for various points in the model. Since the potential field is established based on object properties and their spatial relationships, which is related to the "minimum distance to an object", such force feedback can provide realistic sensation with respect to an object's shape and "feel".
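A minimal sketch of such a force lookup, assuming the pre-computed force vectors are stored in an array indexed by voxel (shape Z × Y × X × 3). The nearest-voxel rounding here is a simplification of the interpolation described earlier, and the gain parameter is a hypothetical device-scaling factor:

```python
import numpy as np

def feedback_force(field, position, gain=1.0):
    """Return the force vector to convey to the haptic device at a tool position.

    field: volume of force vectors, shape (Z, Y, X, 3).
    position: tool position in voxel coordinates (z, y, x).
    """
    # Round to the nearest voxel and clamp to the volume bounds.
    idx = np.clip(np.rint(position).astype(int), 0, np.array(field.shape[:3]) - 1)
    return gain * field[tuple(idx)]
```

The returned direction and magnitude would then be sent to the force feedback controller, so resistance grows as the tool moves toward higher-potential regions near an object.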
  • the same volume of force vectors can also be used to associate and generate audio signals based on some form of relationship between the object and the device or tool interacting with the object. For example, as the tool approaches an object, an audio signal could be generated to signal a potential collision. The sound upon collision and possible damage to the object could be readily adapted to the potential field. In addition, suitable audio messages could also be activated to alert the user to particular situations.
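One way to sketch such distance-based audio cues, mapping the tool-to-object relationship onto sounds. The thresholds and cue names are illustrative assumptions, not values from the patent:

```python
def proximity_cue(distance, contact_threshold=0.0, warn_threshold=5.0):
    """Map a tool-to-object distance to an audio cue name (or None).

    distance: e.g. the object-shortest-distance at the tool position.
    Returns "tick" on contact, "approach_warning" when close, else None.
    """
    if distance <= contact_threshold:
        return "tick"              # contact/collision sound
    if distance <= warn_threshold:
        return "approach_warning"  # alert before a potential collision
    return None                    # no cue while far from the object
```

Because the potential field already encodes the minimum distance to an object, the same data source drives both the force feedback and these audio signals.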
  • the association of sounds with various points in the model provides the user with a collaborative sensation of the object at a particular point.
  • the use of a potential field model in the preferred embodiment provides additional information or dimension with respect to revealing the details of an anatomical structure. However, other information such as distances between objects and other parameters may also be presented on the visual display.
  • Audio interfacing between the user and the system enables the implementation of a voice command and control application thereby allowing the user to more closely concentrate on the task at hand.
  • audible commands are the primary method of communication between a surgeon and his assistants.
  • the computer system executing the model may become the assistant.
  • the incorporation of a voice recognition facility would also enable a user to issue commands to the computer.
  • the user may be navigating to a branch point which is of particular importance.
  • the system may prompt the user by way of synthesized speech warning that a special condition is about to occur.
  • An audio interface is particularly useful for reducing the need for visual cues, which would otherwise crowd the visual display device that provides the user with simulated visual representations of the various objects.
  • the system may include a microphone, the audio input preferably being digitised and matched with actions corresponding to the spoken commands.
  • the voice commands could conceivably relate to requests for various tools. It is expected that tasks such as the fine adjustment of object manipulation would be performed by, or with, assistants since the user's hands are generally occupied.
  • haptic interface is used in this specification to refer to all forms of interface between a user and a system that involves the sense of touch.
  • a device referred to as an instrumented glove to provide a user with tactile responses.
  • Other devices that may be used include force feedback probes and three-dimensional object manipulators.
  • although gloves have been used previously in virtual reality systems, the usual form of an instrumented glove is an output device only.
  • the system also includes a force feedback probe.
  • a force feedback probe should have six degrees of freedom and provide a user with the ability to select a planar image and a point on the image with force feedback to restrict the hand movement within the area of interest.
  • a force feedback probe preferably includes user-activated switches or buttons to allow users to implement combinations of actions.
  • Figure 8a details a typical force feedback probe that is considered suitable for the present invention and Figures 8b to 8e represent a range of alternative shapes and configurations for force feedback probes.
  • switching between different visual representations of a force feedback probe could be effected in a manner similar to the practice in a surgical procedure when a surgeon instructs an assistant to provide him with a specific tool.
  • Figure 9 shows a preferred implementation of an object manipulator.
  • an object manipulator is provided which can also make use of force feedback signals generated by the system.
  • the user may depress a button or activate a switch to move the handle of the object manipulator.
  • the object manipulator is also provided with a scroll wheel at the side of the handle to implement scaling functions. It is expected that the object manipulator would be capable of receiving spoken commands from a user. It is also expected that the user would be able to fine tune object manipulation by spoken commands.
  • FIGS. 10a to 10c show various views of the preferred configuration for a force feedback glove.
  • the glove preferably has both an input and an output capability.
  • a magnetic tracker may be attached to the centre of the palm of the glove, which may be considered by the system to be the rotational centre of the system's representation of the user's hand.
  • the palm of the glove includes a switch or button, the activation of which is effected by closing the fingers of the user's hand.
  • the force feedback controller pulls or releases these cables to effect a sensation upon the wearer of the glove in accordance with the constraint required in light of the user's interaction with simulated objects.
  • the force feedback controller manages the force feedback cables to enable the user to feel the gesture during manipulation. Cables are joined using mechanical joints that have a micro motor attached. When the user is holding on to a solid object or a tool, the composite joints restrain the hand from moving beyond the shape of the object.
  • the system includes a pair of force feedback gloves thereby enabling the use of both hands.
  • the user can bring the virtual object nearer and farther with the same glove.
  • Figure 11 illustrates an example of interaction between haptic, visual and audio interfaces.
  • Figure 11 illustrates the visual representation for a user wearing two instrumented gloves with one hand grasping a tool and the other hand grasping a spherical object.
  • the tool is brought into close proximity of the spherical object and a "tick" sound is played to the user to represent the tool and the object making contact in the simulated environment.
  • the inclusion of audio capabilities in the system enhances the sense of reality that would otherwise be experienced by use of the haptic and visual interfaces alone.
  • a selection of devices, or tools, of different shapes and configurations is made available to users with the expectation that choices as to the types of devices will improve the ability of users to perform tasks in a virtual environment.
  • the system of the present invention incorporates the facilities to support those modes of interaction between a user and simulated, or virtual, objects.
  • the system provides the user with the capability to produce or devise tools with a shape or configuration defined by the user.
  • the Interaction Manager includes two sub-layers as detailed in Figure 12.
  • the largest outer rectangle represents the Interaction Manager, where the upper sub-layer represents the Advanced Interaction Manager (AIM) and the lower sub-layer represents the Elementary Interaction Manager (EIM).
  • AIM Advanced Interaction Manager
  • EIM Elementary Interaction Manager
  • the EIM provides general services and various kinds of information to support the AIM in performing more complex functions.
  • AIM provides outer applications with a uniform interface, including a Command & Message Bus (CMBus).
  • CMBus Command & Message Bus
  • user-developed applications or additional controllers may be installed by listening for and posting commands or messages via the bus, operating in an event-driven mode.
  • the EIM performs relatively basic tasks. It receives user input whilst also providing various devices with signals thus giving force feedback information to users.
  • existing libraries are used such as OpenGL and DirectX for the graphics libraries.
  • the Rendering Engine, Force Maker and Sound & Voice Synthesiser modules provide output signals to the user whereas the Motion Analyser, Gesture Analyser and Voice Command Recogniser receive information via respective input media and abstract them into a form that other components of the system can readily recognise and use.
  • the Rendering Engine is able to supply primitive commands for rendering polygonal model based virtual objects as well as volume texture based virtual objects.
  • upon accepting a primitive command, the Rendering Engine will decompose it into several smaller commands that can be performed by invoking subroutines in the Graphics Library.
  • the Force Maker and Sound & Voice Synthesizer modules operate on a similar basis.
  • the Gesture Analyser receives spatial data associated with several fingers in one hand, and determines corresponding gestures from them. No semantic information related to gesture is determined by the Gesture Analyser.
  • the Voice Command Recogniser operates on a similar basis to the Gesture Analyser.
  • the Motion Analyser operates differently as compared with the Gesture Analyser and Voice Command Recogniser.
  • the Motion Analyser only processes raw spatial data from reach-in devices, such as instrumented gloves, and transforms that data into representative movements in virtual space.
  • the Motion Analyser filters the input data and analyses motion direction and speed.
  • the AIM comprises three components, these being a Command & Message Handler (CMH), a set of Controllers and a Default Virtual Environment Conductor.
  • CMH Command & Message Handler
  • Controllers a set of Controllers
  • Default Virtual Environment Conductor a Default Virtual Environment Conductor
  • a message is a package including three fields, namely a data field containing spatial data originating from the Motion Analyser, a sender field and a recipient field. It is also preferred that a command is a package including a code field related to the service to be requested.
  • the Command & Message Handler can receive and analyse data supplied from the EIM and repackage them into messages or commands by using the Default Predefined-Command Database or the User Defined Command Database. The data from the Motion Analyser are converted into messages, while those from the Gesture Analyser and Voice Command Recogniser are converted into commands.
  • the Default Predefined-Command Database contains records defining a relation between gestures, or words, and commands. Moreover, an action (a gesture sequence) or a sentence can also be matched with a command.
  • the User Defined Command Database contains user-defined commands representing the user's favourite gestures or expressions. Preferably, user-defined relations have a higher priority than predefined relations. The prioritisation of relations acts to resolve conflict when one gesture, or one expression, is related to two commands defined in the Default Predefined-Command Database and the User Defined Command Database respectively.
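The conflict-resolution rule above can be sketched as a simple lookup in which user-defined relations shadow predefined ones. This is an illustrative sketch only; the function and database names are hypothetical, not taken from the actual system.

```python
# Hypothetical sketch of command lookup with user-defined priority.
def resolve_command(token, user_db, default_db):
    """Map a gesture or word to a command; user-defined relations win conflicts."""
    if token in user_db:              # user-defined relations have higher priority
        return user_db[token]
    return default_db.get(token)      # fall back to the predefined database

# Example: the "wave" gesture is defined in both databases.
default_db = {"pinch": "SELECT", "wave": "CANCEL"}
user_db = {"wave": "UNDO"}
```

With these example databases, `resolve_command("wave", user_db, default_db)` yields the user-defined "UNDO", resolving the conflict in the user's favour as described.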
  • a default application, the Default Virtual Environment Conductor, constructs and maintains a virtual three-dimensional environment in which all other applications work. It also provides several basic and necessary routines such as managing three-dimensional control panels, opening data files of a default type and similar utilities. In particular, it is able to supply several virtual tools, such as a metal-detector, 3D-point-picker, 3D-scissors or various kinds of probes with different shapes and functions, for a user to study virtual objects. Furthermore, it has the privilege to control all controllers and applications.
  • the architecture of the preferred embodiment conveniently provides for a user to characterise a virtual world by attaching their applications or additional controllers to the CMBus.
  • the user-developed application or controller has the same status as the default applications or controllers.
  • application refers to a program that is to perform specific functions
  • controller refers to a program that performs general functions
  • to add a new controller, a user should determine what services it can supply and implement those services in an event-driven mode. Then, some specific code should be added to a system configuration file to register the controller. Thus, during the AIM initialisation, the Default Virtual Environment Conductor will open the system file and check whether the registered controllers and applications are available, and then broadcast a message to allow them to initialise themselves. After initialisation, the new controller, like the others, will be ready to serve any request.

Functionality
  • Selecting a position of interest on an object can be a difficult task.
  • the Interaction Manager provides the following facilities to assist the user with the performance of this task.
  • the Motion Analyser includes a filter to reduce that component in the input signal due to human hand vibration.
  • the filter may be calibrated to determine intrinsic parameters by assessing the hand vibration of a user.
  • the filter is effectively a low-pass filter for signals in the three dimensions.
  • the determination of the threshold frequency for the filter is important.
  • an intuitive procedure is implemented to determine the threshold frequency.
  • the user is instructed to move the device along a displayed curve, hence forming a trail representing the user's attempt to follow the curve.
  • the actual trail of the device is displayed and recorded.
  • the average difference between the curve and trail is computed and used to determine a first threshold frequency for the low pass filter.
  • the filtered trail is compared with the curve and the average difference is determined. If the difference is not within an acceptable limit, the process is repeated to determine a new threshold frequency. The process continues until an acceptable average difference is reached.
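The iterative threshold calibration described above can be sketched as follows. A crude moving average stands in for the real three-dimensional low-pass filter, with the window size playing the role of the threshold frequency (a larger window corresponds to a lower cut-off); all names and values are illustrative assumptions.

```python
# Illustrative sketch of the filter-threshold calibration loop.
def low_pass(trail, window):
    """Very crude low-pass stand-in: moving average with the given window size."""
    return [sum(trail[max(0, i - window + 1):i + 1]) / len(trail[max(0, i - window + 1):i + 1])
            for i in range(len(trail))]

def avg_difference(curve, trail):
    """Average difference between the displayed curve and the recorded trail."""
    return sum(abs(c - t) for c, t in zip(curve, trail)) / len(curve)

def calibrate(curve, raw_trail, limit, max_window=16):
    """Increase smoothing (i.e. lower the threshold frequency) until the
    average difference from the curve is within the acceptable limit."""
    window = 1
    while window <= max_window:
        if avg_difference(curve, low_pass(raw_trail, window)) <= limit:
            return window
        window += 1
    return max_window

# Example: a straight "curve" followed with alternating hand-vibration noise.
curve = [0.0] * 10
raw = [1.0, -1.0] * 5
window = calibrate(curve, raw, limit=0.2)
```

In this toy example the loop settles on a window of 2, at which point the filtered trail is within the acceptable limit of the reference curve.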
  • a Target Finder is also provided for any picking tool, designed as a knowledge-based agent that can accept voice commands.
  • a set of commands associated with Target Finder are also defined so that the Command & Message Handler can co-operate by correctly dispatching those commands recognised by Voice Command Recogniser to it via the CMBus.
  • Figure 13 shows a hierarchical representation of the functions involved in the transmission and reception of commands in the IOEES.
  • a three-dimensional snap function provides an efficient object picking technique in three-dimensional modelling. However, when a three-dimensional reach-in device is used, device calibration becomes a relatively difficult problem. Two methods are proposed to overcome this difficulty.
  • the first method involves adding an offset vector to positions conveyed from the reach-in device.
  • device calibration has been performed successfully before any snap operation (i.e. a transformation T_cal acquired from the calibration will transform positions from device space into virtual space).
  • the device space is defined as the space in which the reach-in device moves and the virtual space is defined as the space in which the virtual environment is imagined to be. If there is no snap operation, the relation between the two spaces is P_vir = T_cal · P_dev, where P_vir is in virtual space and P_dev is in device space.
  • an offset vector V_i has been well defined in the i-th snap operation.
  • a position P_tag near the position P_vir is tagged by various snap rules, such as locating the nearest grid corner, or finding the nearest point on a certain edge or surface.
  • a vector V = P_tag − P_vir is added to V_i.
  • the result is the desired new offset vector V_{i+1} = V + V_i. V_{i+1} then substitutes for V_i in the transformation P_vir = T_cal · P_dev + V_{i+1} until the next snap operation.
  • the virtual stylus is displayed as jumping onto the position P_tag.
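A minimal sketch of this first method follows, assuming for brevity that the calibration transform T_cal is the identity. On each snap the offset is updated so that the virtual stylus appears at the tagged position; the function names are illustrative.

```python
# Sketch of the offset-vector snap method (T_cal assumed identity here).
def to_virtual(p_dev, offset):
    """P_vir = T_cal * P_dev + V, with T_cal taken as the identity."""
    return tuple(d + o for d, o in zip(p_dev, offset))

def snap(p_dev, offset, p_tag):
    """Return the new offset: V_{i+1} = (P_tag - P_vir) + V_i."""
    p_vir = to_virtual(p_dev, offset)
    return tuple(o + (t - v) for o, t, v in zip(offset, p_tag, p_vir))

# Example: the stylus snaps from (1, 2, 3) onto a nearby grid corner (1.5, 2, 3).
offset = (0.0, 0.0, 0.0)
p_dev = (1.0, 2.0, 3.0)
p_tag = (1.5, 2.0, 3.0)
offset = snap(p_dev, offset, p_tag)
```

After the update, mapping the unchanged device position through the new offset places the virtual stylus exactly on the tagged position, which is why it is displayed as "jumping" there.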
  • the second method can be used where a high level of realism is required.
  • when a reach-in device is moved to a certain place, the user will observe the virtual stylus moving to the same place.
  • the spatial information for the device from the haptic sense and vision are coincident. Therefore, the former method, moving the virtual stylus without moving the reach-in device, cannot be exploited here.
  • the fact that the virtual stylus' pinpoint need not point exactly at a selectable position makes it possible to implement the operation in this situation.
  • although the tagged position is determined as in the first method, the virtual pinpoint is not moved onto it. Instead, the tagged point will blink or be highlighted to show that it has been selected.
  • the Default Force Feedback uses a model based on a potential field to generate resistance. It can be described as follows: if the scalar value of each voxel is considered a potential value, the volume data set is equivalent to a discrete potential field.
  • the potential field is an electric field
  • a force should be exerted on any object with electric charge.
  • the intensity of force is in proportion to the gradient of the potential field on the position where the object is located.
  • the force direction, along the gradient direction or its opposite direction, is dependent on which kind of electric charges (positive or negative) the object possesses.
  • the potential value is also a factor contributing to the force.
  • the force can be defined as a function of potential gradient and potential value.
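As a sketch of this model, the force at an interior voxel can be taken to be proportional to a central-difference estimate of the potential gradient, with its sign set by the charge. The field layout (nested lists) and function names are assumptions for illustration.

```python
# Sketch of the potential-field force model over a discrete 3D scalar field.
def gradient(field, x, y, z):
    """Central-difference gradient at an interior voxel of a 3D field."""
    gx = (field[x + 1][y][z] - field[x - 1][y][z]) / 2.0
    gy = (field[x][y + 1][z] - field[x][y - 1][z]) / 2.0
    gz = (field[x][y][z + 1] - field[x][y][z - 1]) / 2.0
    return (gx, gy, gz)

def force(field, pos, charge=1.0):
    """Force proportional to the potential gradient; the sign of the charge
    selects the direction along or against the gradient."""
    g = gradient(field, *pos)
    return tuple(charge * c for c in g)
```

For a potential that increases linearly along one axis, the gradient (and hence the force on a unit positive charge) points along that axis, while a negative charge feels the opposite direction, mirroring the description above.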
  • spatial information about a haptic device is measured by procedures in the Haptics Library and is then passed to the Motion Analyser and the Command & Message Handler.
  • the spatial information is communicated to the Default Force Feedback Controller by way of the Command & Message Bus, together with other control information.
  • a Finite Element Method (FEM) mesh model of the interacting tool is retrieved from an FEM mesh library.
  • the FEM mesh model represents the shape and physical properties of the virtual tool being selected.
  • Kernel refers to a three-dimensional filter function.
  • a convolution operation between the Kernel function and the potential field E_potential is performed as an anti-aliasing means in order to obtain a smooth result.
  • the entire resistance f_pf is an integral of df_pf, which acts upon an infinitesimal element of the virtual object. Since the Finite Element Method is employed, the above integration is simplified to a sum of contributions from each finite element. To reduce the computational requirement, only the surface mesh is considered in the preferred embodiment. Hence, each finite element geometrically corresponds to a mesh element. The element is depicted by a set of vertices and a set of interpolation functions.
  • n_k(u,v) is a unit normal vector
  • (u,v) are local parameters on the mesh element in consideration
  • a_k(u,v) is a physical property. The element k can be described by φ_k(u,v) = Σ_j p_kj · w_j(u,v), where w_j(u,v) is the j-th interpolation function and p_kj is the j-th vertex of element k.
  • the total resistance force based on a potential field may be determined as the sum of the per-element contributions, f_pf = Σ_k Δf_k.
  • the computation of the torque exerted on the virtual tool by the potential field follows a similar procedure to the force.
  • the torque on element k is ΔM_k, the moment of the element force about the tool's reference point.
  • the total resistant torque is determined as the sum of all the ΔM_k.
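The per-element summation over the surface mesh can be sketched as below: each element contributes a force and a moment of that force about the tool's reference point, and the totals are the sums of those contributions. The element data here are illustrative placeholders, not the actual FEM quantities.

```python
# Sketch: total force and torque as sums of per-element contributions.
def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def total_force_and_torque(elements, ref):
    """elements: list of (centroid, element_force) pairs over the surface mesh;
    ref: the tool's reference point about which torque is taken."""
    f = [0.0, 0.0, 0.0]
    m = [0.0, 0.0, 0.0]
    for centroid, df in elements:
        r = tuple(c - o for c, o in zip(centroid, ref))  # lever arm
        t = cross(r, df)                                 # element torque
        for i in range(3):
            f[i] += df[i]
            m[i] += t[i]
    return tuple(f), tuple(m)
```

As an example, two opposite element forces on opposite sides of the tool cancel in the total force but add up to a pure couple in the total torque.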
  • collisions or contact between the virtual tool and a volume object can be simulated. Since the potential gradient g will be greater on the boundary of a volume object, according to the development of the potential field, greater force feedback will be sensed when approaching a boundary.
  • the audio response provided by the potential field in the preferred embodiment may be described as follows.
  • the input generated from a potential field for the audio device is a scalar value.
  • the volume of the audio output is determined to be proportional to the distance pre-calculated in the potential field.
  • different sounds may also be deployed to represent the distance.
  • direct distance broadcasting can also be employed to increase the awareness of a user.
  • a specified coefficient should be used for the purpose, such as volume = v × distance.
  • the coefficient v can be customized depending on the application.
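The distance-to-volume mapping with a customizable coefficient might look like the following sketch; the function name and the clamping range of the audio device are assumptions.

```python
# Sketch of the audio response: volume proportional to the pre-calculated
# distance via a customizable coefficient v, clamped to the device's range.
def audio_volume(distance, v=0.5, max_volume=1.0):
    """volume = v * distance, clamped to [0, max_volume]."""
    return max(0.0, min(max_volume, v * distance))
```

Applications can tune `v` to make the audio cue more or less sensitive to distance without changing the rest of the pipeline.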
  • the Default Vision Controller will support the Default Force Feedback Controller by performing tasks such as probe rendering, while the Default Force Feedback Controller takes control to determine the behaviour of the "ghost probe". The following example details how the Default Force Feedback Controller makes such a determination.
  • the Default Force Feedback Controller will refer both these motion suggestions and the current spatial information of the device to determine whether the device diverges from the desired path. If it does, the Default Force Feedback Controller will render the ghost probe visible, move it to the suggested position, and then keep it there, while resistance is applied to the reach-in-device forcing it toward the ghost probe. From the user's perspective, it appears that the ghost probe splits from the real probe. Meanwhile, the user will feel less resistance encountered when moving the real probe toward the ghost probe. When the real probe is moved sufficiently close to the ghost probe, the ghost probe will re-merge with the real probe and disappear.
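One update step of this ghost-probe behaviour could be sketched as below. The split and merge tolerances and the simple spring-like restoring force are illustrative assumptions, not the controller's actual parameters.

```python
# Sketch of one ghost-probe update step: split when the device diverges,
# pull the device toward the ghost, and re-merge when it is close enough.
def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def ghost_probe_step(device_pos, suggested_pos,
                     split_tol=0.5, merge_tol=0.1, k=2.0):
    """Return (ghost_visible, force_on_device) for one update step."""
    d = dist(device_pos, suggested_pos)
    if d > split_tol:
        # ghost splits from the real probe; a spring-like force pulls the
        # device toward the suggested (ghost) position
        force = tuple(k * (s - p) for s, p in zip(suggested_pos, device_pos))
        return True, force
    if d < merge_tol:
        return False, (0.0, 0.0, 0.0)   # ghost re-merges and disappears
    return True, (0.0, 0.0, 0.0)        # ghost remains visible, no extra force
```

Moving toward the ghost shrinks the restoring force, matching the description that the user feels less resistance when moving the real probe toward the ghost probe.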
  • a virtual tool is independent of the physical device. Accordingly, the existence of a virtual tool only depends on whether user can sense it and has nothing to do with whether there is an actual device associated with it.
  • a user can "grasp" a virtual tool with a certain gesture while the respective haptic sense is fed back to the fingers and palm. Moreover, a user should be able to feel through his fingers what has occurred when the "grasped" tool touches or collides with other virtual objects. Since these tools exist only in virtual space, they can be designed with any shape, configuration or characteristic.
  • an ideal tool for exploring cells may have a pen-like shape with a thin long probe at one end. It is also expected to have some intelligence to help a user find targets of interest, being able to guide the user by means of voice, vision and/or haptic sense as well as accepting the user's spoken commands.
  • the Target Finder and the respective techniques for three-dimensional snap and "Real & Ghost Probe" as discussed previously can be integrated to support such a virtual tool.
  • the virtual tool can be regarded as a combination of a group of related functions. Therefore, the concept provides a means for a user to recombine various features into a few convenient and powerful tools that he requires.
  • Several default virtual tools are provided relating to contour detection, point tagging, shape modification and deformation.
  • the system provides application developers the ability to define their own tools.
  • a hierarchical contour model is used to represent an object extracted from volume image data.
  • This model can be further converted into other geometric models such as a surface model or a mesh model.
  • the Extractor includes three different modes of operation, namely Tracking Mode, Picking Key Points Mode and Guided Mode. In the following sections, the architecture and implementation of the Extractor will be discussed. The role of the Extractor in operational procedures is subsequently described.
  • the Extractor includes two virtual tools referred to as the Cutting Plane Probe and the 3D-point-picker.
  • in their implementation, several previously described techniques or functions such as 3D snap, Target Finder, "Ghost & Real Probe" and audio effects are all integrated to facilitate the interactive extraction.
  • the Cutting Plane probe preferably has the shape of a magnifying glass, namely a handle with a loop at one end. As a result, it is a tool with six degrees of freedom. Furthermore, the junction between the handle and the loop is preferably rotatable, enabling the loop to rotate when some virtual obstacle is encountered or when it is attracted or driven by some force in the virtual environment. In the Extractor, the cutting plane technique has been exploited to control such rolling.
  • the shape of the loop of the Cutting Plane probe can be deformed to suit the shape of any closed planar curve.
  • the loop is a circle with a user definable radius.
  • the 3D-point-picker includes the Target Finder as a component and thus is a virtual tool that responds to spoken commands. Furthermore, if some domain knowledge database is available, the picker will have more data with which to help the user. Certainly, it is preferred to integrate the constraint display and other facilities for 3D-point reaching.
  • the following paradigm describes the Extractor, where the Editor acts as a co-worker receiving the Extractor's output and providing the Extractor with a respective parametric object model.
  • the Detector module forms the core and the others are its peripheral parts. Each peripheral part is responsible for handling a respective working mode, while the Detector processes tasks such as contour detection submitted from the peripheries.
  • the function of the Detector is to extract a contour on a cutting plane that is determined automatically.
  • the cutting plane is determined from a known two-dimensional image processing technique, the details of which are not described in this specification.
  • the input includes a closed curve and a supporting plane. The steps of the algorithm are illustrated below where the cutting-plane technique mentioned before is used.
  • a plane, referred to as the cutting-plane, can be determined from the vector and the middle point m between the two central points (b_1 and b_2).
  • the cutting-plane is completely determined by two conditions: the vector is its normal and the midpoint m lies on it.
  • the method from (a) to (d) for determining the cutting-plane is the so-called cutting-plane technique.
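The cutting-plane construction can be sketched directly: the given vector serves as the plane normal and the midpoint m of b_1 and b_2 lies on the plane, yielding the coefficients of a·x + b·y + c·z + d = 0. The representation chosen here is an illustrative assumption.

```python
# Sketch of the cutting-plane construction from b1, b2 and a normal vector.
def cutting_plane(b1, b2, normal):
    """Return (a, b, c, d) for the plane a*x + b*y + c*z + d = 0 whose normal
    is `normal` and which passes through the midpoint m of b1 and b2."""
    m = tuple((p + q) / 2.0 for p, q in zip(b1, b2))
    d = -sum(n * c for n, c in zip(normal, m))   # plane passes through m
    return normal + (d,)

# Example: central points on the x-axis with an x-aligned normal
# give the plane x = 1.
plane = cutting_plane((0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (1.0, 0.0, 0.0))
```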
  • the potential field model plays an important role as it acts to exert an attractive force on the deformable model which is placed on the cutting-plane to extract the desired contour.
  • a deformable model which is geometrically a closed curve and includes various mechanical properties such as elasticity and rigidity, is arranged to enclose the object on the cross section.
  • the forces calculated based upon the potential field modelling will attract the model to approach the boundary of the vascular object. Since the model is deformable, both the shape and location of the model will be influenced. When the force of attraction and the internal forces resulting from the properties of elasticity and rigidity of the model are in equilibrium, the model will not be deformed any further, and at that stage it should correspond to the contour of the cross section of the vascular object.
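A toy sketch of this equilibrium is given below: each contour point moves under an external radial attraction toward a target boundary plus an internal elastic pull toward its neighbours' average, and settles where the two balance (slightly inside the target radius, reflecting the elastic shrinkage). A circular boundary stands in for the real potential-field force, and the coefficients are arbitrary assumptions.

```python
# Toy deformable-contour relaxation: external attraction to a circular
# boundary vs. internal elasticity pulling each point toward its neighbours.
import math

def relax_contour(points, target_radius, alpha=0.05, beta=0.5, iters=200):
    pts = [list(p) for p in points]
    n = len(pts)
    for _ in range(iters):
        new = []
        for i, (x, y) in enumerate(pts):
            # internal elasticity: pull toward the average of the neighbours
            ax = (pts[i - 1][0] + pts[(i + 1) % n][0]) / 2.0
            ay = (pts[i - 1][1] + pts[(i + 1) % n][1]) / 2.0
            # external force: pull radially toward the target boundary
            r = math.hypot(x, y) or 1e-9
            tx, ty = x * target_radius / r, y * target_radius / r
            new.append([x + alpha * (ax - x) + beta * (tx - x),
                        y + alpha * (ay - y) + beta * (ty - y)])
        pts = new
    return pts
```

Starting from a circle of radius 2 with a target boundary at radius 1, the contour converges close to (just inside) the boundary, where attraction and internal forces balance.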
  • the Tracker tracks the user-controlled reach-in device and is responsible for reconstructing a sweeping volume to capture the user-desired object in the volume data.
  • the Tracker records the spatial information of the CP-probe while incrementally constructing the sweeping volume comprising a series of closed curves, and checks the validity of the volume when the user's input is not correct. Once it detects invalid input, it will inform the Cutting Plane probe, and the latter will have the Default Force Feedback Controller start the corresponding constraint display using "Ghost & Real Probe".
  • the Tracker is only required to determine the next position to which the probe should move, and feed back the expected position and trend to the Cutting Plane probe. It is preferred to use a method referred to as "2- steps-forecast" to evaluate the expectation.
  • the Tracker fits a curve locally to the trail of the moving loop's centre around its latest position so as to estimate the position and trend.
  • the estimate information will be passed to the Detector with other required parameters, where a more accurate evaluation is effected and the result, a position where the loop's centre should move, is fed back to the Tracker.
  • the Tracker will pass those closed curves to the Detector continuously while accepting and rendering the extracted contours.
  • the Explorer reconstructs a surface or a curve passing through several points given by a user while invoking the Detector to extract contours near the surface or the curve.
  • the surface is expected as an approximate representation of the user-desired object's surface and the curve is as an approximate skeleton line of the object.
  • a description of the Explorer working with skeleton line is provided.
  • the Explorer plays the role of a pathfinder after accepting a list of points, denoted as key-point-list (KPL). It will then attempt to explore the path the user desires. Namely, the Explorer must interpolate several points or a curve between every two adjacent KP's. A relatively simple method for this is linear or polynomial interpolation.
  • Figure 14a shows how the Explorer works.
  • the elements in the KPL are denoted q^0, q^1, q^2, ..., q^n.
  • R_0 — a radius
  • V_0 — a vector. The radius defines the maximum length of one step the Explorer takes.
  • the vector defines a direction along which the Explorer will step.
  • the Explorer tries to predict the next position p_1', which is almost along V_0 and is away from p_0 by a distance less than R_0.
  • the tangent V_1' of the desired path at position p_1' will be evaluated together with R_1'.
  • the predicted triple (p_1', V_1', R_1') from the Explorer is then passed to the Detector.
  • the Detector will yield a triple (p_1, V_1, R_1) expected to be a precise version of the input, where p_1 is on the skeleton line and close to p_1', V_1 is the tangent vector of the skeleton line at p_1, and R_1 is the radius of the minimum enclosing circle of the extracted contour around p_1.
  • the new triple is returned to the Explorer.
  • the Explorer treats p_1 as the current position, and (p_1, V_1, R_1) is ready for the next step of the exploration.
  • b_1, b_2 and m are the auxiliary positions for Detector refining; the reader can refer to the section about the Detector.
  • the Explorer first chooses a vector v at random and validates it by checking that no surface crosses the path between p_{i+1}' and p_i. If the position is valid, the Explorer performs no further computation; otherwise, it randomly chooses another v to try.
  • another validation is that p_{i+1}' must satisfy the condition (p_{i+1}' − p_i) · V_i > 0, i.e. the step proceeds forward along the current tangent.
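One Explorer step can be sketched as below: the next position is predicted roughly along the current tangent, no farther than the current radius, and the step length is validated. The step fraction and helper names are assumptions for illustration.

```python
# Sketch of one Explorer prediction step with step-length validation.
import math

def norm(v):
    return math.sqrt(sum(c * c for c in v))

def predict_next(p, v, r, step_fraction=0.8):
    """p' = p + step_fraction * R * (V / |V|): a step along V shorter than R."""
    length = norm(v) or 1e-9
    return tuple(pi + step_fraction * r * vi / length for pi, vi in zip(p, v))

def valid_step(p, p_next, r):
    """Validate that the step stays within the current radius: |p' - p| < R."""
    return norm(tuple(a - b for a, b in zip(p_next, p))) < r

# Example step from p_0 along V_0 with radius R_0.
p0, v0, r0 = (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), 0.5
p1 = predict_next(p0, v0, r0)
```

The predicted position would then be refined by the Detector, as described above, before the Explorer takes the next step.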
  • the Discreter plays the role of an advanced converter from a parametric object model to a discrete object model such as the contour model. It is a bridge between the Extractor and Editor, as observed in Figure 15. It contains a model loader and a model converter internally. There are two ways to pass a parametric object to it: the first is by way of the Editor, and the second is by importing existing models directly from an external file. The parametric object models are regarded as shape hints in the Discreter and can be exploited to aid the extraction. If such a guide is available, it will assist the exploration of the desired object in volume data, where the awkward problem of branch selection would be avoided.
  • the Extractor and Editor construct a loop for the incremental extraction. There are also several loops in Editor to provide various ways to modify the extracted result or shape hint. Thus, it can work firstly to feed the Extractor a shape hint to guide its extraction and secondly to work as a post-process to refine or trim objects determined by the Extractor.
  • the Editor consists of three parts, namely, a Modeler, a Selector and an Adjuster.
  • the Modeler constructs a parametric model based on a spline technique from a discrete model consisting of a series of contours. There are two alternatives, spline interpolation and spline fitting, to accomplish the task. The Modeler works automatically and no interaction is required.
  • the Selector is responsible for the selection of several key points, either automatically or interactively.
  • the key points are referred to as those points scattered on important positions in the parametric model.
  • the user is also expected to pick those points that are around some lost detail in the parametric model, as it would greatly assist the refinement procedure.
  • the Adjuster manages the user's interaction and provides the user with the 3D- point-picker to control those selected key points.
  • the picker can be used to access and control any key point with less effort as described in the section "Guided Mode".
  • the user can alter the shape under a certain constraint by adjusting the key points.
  • a three-dimensional volume is loaded and displayed alongside audio and haptic controls.
  • the first step involves a preliminary extraction of a three- dimensional object from the volume data.
  • the user refines the extracted model with tools provided in the Editor module.
  • the user could output or save the model when he or she is satisfied with the results.
  • the Extractor provides a user with three interaction modes: Tracking Mode, Picking Key Points Mode and Guided Mode. Under each mode, the user is expected to give the Extractor a shape hint, in a certain form, of his desired object by use of one or several corresponding virtual tools. Thus, emphasis is put on two questions: what form should be given, and how to give that form.
  • This mode is intended to provide a user with the virtual tool CP-probe and expects the user to catch a desired object in volume data. The entire motion will be tracked by the Extractor which will then be exploited as a clue to benefit the extraction.
  • the user can move the CP-probe freely before he selects a target object. At the same time, he can reshape the loop. Then, he is expected to drive the probe to sweep a volume in which the target is almost fully contained. Meanwhile, a series of contours is displayed, approximating the surface of the desired object nearest to the sweeping volume. Therefore, if there is any cavity in the object, it cannot be extracted unless the user sweeps another volume that encloses it sufficiently closely. During the sweeping operation, a constraint will aid the user to perform the sweeping more accurately, so as to ensure the volume almost encloses the "target" perceived by the Tracker.
  • the user is expected to handle the 3D-point-picker and pick some key-points (KP) scattered on some shape hint of the desired object, such as skeleton or surface or even a volume.
  • KP key-points
  • the Explorer will reconstruct a polygonal surface from those KP's, and then slice it up with a stack of parallel planes. The intersection, as piecewise lines closed or unclosed on each section, will be transferred to the Detector. The Detector will attempt to extract those contours, each of which is the nearest to a certain piecewise line respectively.
  • the Explorer will attempt to trace the line.
  • picking points along the central line is the most convenient.
  • its skeleton line is determinable and unique.
  • the Explorer can do more for a user since these objects are simpler in topology.
  • the Explorer can operate automatically. Nevertheless, the degree of automation of the Explorer depends upon the quality of the KP's. In other words, if the specified KP's are unable to define a shape hint well, the Explorer pauses and prompts the user for further instruction.
  • the instructions to a user may include questions on what to do and how to do it, or to suggest that the user input some parts of the shape hint again. In this mode, the user need only pick some points on a surface or along a skeleton.
  • the 3D-point-picker plays a very important role here since it can directly influence the efficiency and effectiveness of such an operation.
  • the techniques for reaching a position are well established such as calibrating a filter to reduce spatial data noise, 3D-snap and intelligent Target Finder. Two examples for point approaching have already been described in the section entitled "Reach A Position".
  • a user inputs a curve or surface as a shape hint directly instead of a number of key points.
  • a shape hint such as a skeleton or surface. More conveniently, a user can import some existing shape hint directly.
  • These shape hints are referred to as guides.
  • the Extractor will perform registration between a guide and the user-desired object.
  • different registration methods are used for different guides.
  • if a curve is regarded as a skeleton-guide, the registration involves adjusting it to be contained in the object and to represent the skeleton well.
  • if a surface is regarded as a surface-guide, the registration deforms it according to the shape of the desired object while a metric evaluates the difference between them.
  • if an image-based model from a certain atlas is input as a guide, a seed-point-based registration will be executed.
  • the Discreter module will sample that curve or surface, then provide the required parameters to the Detector similar to the Explorer. The difference is that the Discreter will encounter less difficulty under the guide when it tries to find the next position where the contour should be extracted. This is because there is no possibility for the Discreter to lose its way during such an exploration.
  • the 3D-point-picker can aid a user to perform various tasks.
  • the picker is expected to be a convenient tool for modelling such guides. For example, a user may intend to design a curve to represent the skeleton of an object. In this instance, the user is expected to input several control points to model the curve. The user can give voice commands to control the curve shape, such as altering spline bases or geometric continuity, and adjusting or jiggling control points. During the process of adjustment, the user will perceive haptic sensations. These haptic sensations are the result of the synthetic effect of multiple forces, such as the elasticity of the curve and an artificial repulsion based on potential field theory.
  • the force originating from the constraint that the curve should be a skeleton line also adds to the haptic sensation. These sensations are useful for the user to work with three-dimensional modelling. It should be noted that the tool could also be exploited in any situation when editing is required.
  • Guided Mode the result may be more accurate and more efficient than that obtained from Picking Key Points Mode, however, a guide must be modelled or available beforehand.
  • Another way of providing guides while avoiding or lessening the burden of modelling is to use either Tracking Mode or Picking Key Points Mode to generate a contour model.
  • As an initial model it is submitted to the Editor participating in some editing or refining loop.
  • a parametric model of the same object will be generated by the Editor, and then it is regarded as a guide for the Extractor where the extraction in Guided Mode will be executed again.
  • the Guided Mode enables users to perform a progressive extraction.
  • the user will have an option to start the Editor inside the Extractor or start it outside the Extractor where the user needs to acquire the contour model manually.
  • a parametric model is constructed and rendered. The user then chooses the mode to select key points. If the automatic mode is chosen, several key points, which can keep the shape of the model, are selected and highlighted. They are now ready for the user's adjustment. If user selects the interactive mode, the user is required to tag on some points as the key points on the parametric model.
  • once a key point has been selected, the user can move it immediately. It is suggested that he should add some key points near details that have not been extracted or have been extracted incorrectly. Such key points are known as detail key points. After adjustment, a new parametric model that interpolates the key points is reconstructed. If the user requires a refinement, he should initiate the Extractor with the new parametric model.
  • the Extractor will establish a new contour model corresponding to the parametric model. Therefore, the lost or imperfect detail would be extracted if respective detail key points were given during the key point selection.
  • the process is a progressive refinement loop for extraction and edit. During the process, the user may obtain the result at any time.
  • the basic workflow is outlined in Figure 16.
  • the Interaction Manager is omitted for convenience.
  • the discrete model (contour model) is first passed to the Editor as input. This model then flows to the Modeler (B). Afterwards, a parametric model, which is the result of either an interpolation or a fitting method, is passed to the Selector (C).
  • the Selector provides three modes, namely automatic mode, interactive mode and simple mode, which can be chosen by the user.
  • the selected key points will be conveyed to the Adjuster (D). After adjustment, those key points with new positions are returned to the Modeler.
  • processes B, C and D are executed again. Thus, the processes B, C and D form a loop, called the basic editing-loop.
  • the user may choose the simple mode to re-use key points (KPs) selected in the previous editing loop, avoiding the need to reselect them.
  • the Selector simply conveys the parametric model and those KPs selected in the previous editing loop to the Adjuster.
  • the module E is the Extractor working in the Guided Mode. Since the Extractor performs better in Guided Mode, the loop may be considered a progressive refinement.
  • This loop is referred to as the refining-loop, and may invoke any combination of the refining-loop and the basic editing-loop, such as the sequential process steps BCDBEAB, BEABCDB, BEABCDBCDB and so on, thus forming a complex editing-loop.
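The basic editing-loop (processes B, C and D, optionally closed by the Guided-Mode Extractor E) can be sketched as follows. The function names and termination logic are illustrative assumptions, not taken from the specification.

```python
def editing_loop(contour_model, fit, select_key_points, adjust,
                 extract_guided=None, max_iterations=10):
    """Sketch of the basic editing-loop: Modeler (B), Selector (C), Adjuster (D),
    optionally closed by the Extractor in Guided Mode (E) as a refining-loop.

    fit               -- builds a parametric model from a discrete contour model (B)
    select_key_points -- picks key points (automatic, interactive or simple mode) (C)
    adjust            -- lets the user reposition key points; returns (points, done) (D)
    extract_guided    -- optional Extractor working in Guided Mode (E)
    """
    model = contour_model
    key_points = None
    parametric = None
    for _ in range(max_iterations):
        parametric = fit(model)                                 # B
        key_points = select_key_points(parametric, key_points)  # C
        key_points, done = adjust(key_points)                   # D
        if done:
            break
        # close the loop: refine via Guided-Mode extraction, or re-model the KPs
        model = extract_guided(parametric) if extract_guided else key_points
    return parametric
```

The user may obtain the current parametric model at any point, matching the "result at any time" behaviour described above.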

Abstract

In one aspect, the present invention provides a method of interacting with simulated three-dimensional objects, the method including the steps of the identification of three-dimensional objects from volume images; the association of physical properties with identified objects, said properties including at least visual and haptic properties of the identified objects; and incorporating said identified objects and associated physical properties into a system including at least one visual interface device and at least one haptic interface device, thus enabling the system to simulate visual and haptic properties of the identified objects, or any part thereof, to a human user interacting with the simulated three-dimensional objects, or any part thereof, said interaction including the generation of signals by the system for transmission to the at least one visual interface device in accordance with the physical properties associated with the objects and the reception of signals from the at least one haptic interface device in accordance with user requests.

Description

A VIRTUAL SURGERY SYSTEM WITH FORCE FEEDBACK
Field of the Invention
The invention relates to a novel means and method of extracting and editing three-dimensional objects from volume images utilising multimedia interfaces. The means and method is particularly suited to the extraction of objects from volume images produced by Magnetic Resonance Imaging (MRI), Computed Tomography (CT) or ultrasound equipment.
Background of the Invention
The extraction of three-dimensional images from volume data consisting of a stack of two-dimensional images is attracting significant attention in light of the general recognition of the likely impact that computer integrated surgical systems and technology will have in the future.
Computer assisted surgical planning and computer assisted surgical execution systems accurately perform optimised patient specific treatment plans. The input data to these types of systems are volume data usually obtained from tomographic imaging techniques. The objects and structures embedded in the volume data represent physical structures. In many instances, the medical practitioner is required to feel the patient in the diagnosis process.
An example where systems are used for the extraction and editing of three-dimensional objects from volume images is interventional angiography, which has generated significant research and development on the three-dimensional reconstruction of arterial vessels from planar radiographs obtained at several angles around the subject. In order to obtain a three-dimensional reconstruction from a C-arm mounted XRII, the trajectory of the source and detector system is traditionally characterized and the pixel size computed. Various methods have been proposed in the past to compute different sets of characterization parameters.
Different approaches have also been proposed for vascular shape segmentation and structure extraction using mathematical morphology, region growing schemes and shape features in addition to greyscale information. One particular method for extracting the three-dimensional structure of blood vessels in the lung region from chest X-ray CT images proposes a recursive search of the cross-sections of tree-structured objects. The reconstruction of a three-dimensional volume of the vessel structure has been demonstrated in less than 10 minutes after the acquisition of a rotational image. In this particular instance, the volume rendered three-dimensional image offers high quality views compared with the results of other three-dimensional imaging modalities when applied to high contrast vessel data.
In previous studies, an interactive vessel tracing method has been used to obtain a cerebral model. As volume data sets often lack the resolution to allow automatic network segmentation for blood vessels, this approach provides a free-form curve drawing technique by which human perception is quantitatively passed to the computer, using a "reach-in" environment as provided by a "Virtual Workbench". The "Virtual Workbench" is a virtual reality workstation that provides three-dimensional viewing by use of stereoscopic glasses and time split displays rendered using a computer graphics workstation.
This precise and dexterous environment transforms perception to allow the relatively easy identification of vessels. The tools exploit the reach-in hand-immersion abilities of the Virtual Workbench and allow sustained productive work and the generation of three dimensional texture sub-volumes to allow interactive vessel tracing in real-time. A set of magnetic resonance angiography (MRA) data of the human brain is used for this purpose and a total of 251 segments of the cerebral vessels have been identified and registered based on the connection with the primary vasculature of the Visible Human male Data (VHD). The VHD represents the effort of the National Institute of Health in the United States to produce a complete, anatomically detailed three-dimensional representation of normal male and female bodies.
Generally, the traditional Human Computer Interaction (HCI) methodology is considered cumbersome. For many intricate or complex functions there are too many buttons to be depressed on the keyboard, mouse or other interfacing devices. These interfaces are not natural for human interaction, particularly in medical applications, since surgeons are not free to use their hands to operate a computer and often communicate with their assistants even while changing tools.
For many applications there is a need for an improved human machine interface that is better suited to the needs of humans, and in particular for tasks such as those performed by surgeons.
The above discussion of documents, acts, materials, devices, articles or the like is included in this specification solely for the purpose of providing a context for the present invention. It is not suggested or represented that any or all of these matters formed part of the prior art base or were common general knowledge in the field relevant to the present invention as it existed before the priority date of each claim of this application.
Summary of the Invention
In a first aspect, the present invention provides a method of interacting with simulated three-dimensional objects, the method including the steps of the identification of three-dimensional objects from volume images; the association of physical properties with identified objects, said properties including at least visual and haptic properties of the identified objects; and incorporating said identified objects and associated physical properties into a system including at least one visual interface device and at least one haptic interface device, thus enabling the system to simulate visual and haptic properties of the identified objects, or any part thereof, to a human user interacting with the simulated three-dimensional objects, or any part thereof, said interaction including the generation of signals by the system for transmission to the at least one visual interface device in accordance with the physical properties associated with the objects and the reception of signals from the at least one haptic interface device in accordance with user requests.
Preferably, during interaction between the human user and the simulated objects, ancillary visual information is presented to the user.
It is also preferred that the method includes the association of audible properties with identified objects, or parts thereof, and the system includes at least one audio interface device. With the inclusion of an audio interface device, the method may also provide ancillary audio information to the user during the user's interaction with simulated three-dimensional objects.
It is particularly preferred that haptic interface devices include the facility to receive signals from the system representing associated haptic properties of simulated objects.
Preferably, the interaction between the various interface devices of the system is co-ordinated such that the correct registration of physical properties with the identified objects is maintained for the various interfaces during interaction between the simulated objects and the human user.
The method may also include the step of editing the representation of a simulated object in order to refine the representation that was originally derived from volume images. Preferably the step of editing the representation of a simulated object includes the use of the at least one haptic and at least one visual interface device by the user. The step of editing the representation of a simulated object may also include the use of an audio interface device by the user during the editing process.
In a particularly preferred embodiment, the method includes the generation of a discrete model of identified objects derived from volume images and the step of editing the representation of a simulated object includes the application of an interpolative scheme to produce a continuous model from the discrete model derived from volume images. Preferably, the method also includes an iterative process whereby an edited version of an object is compared with a previous discrete model of the object to determine the difference between the edited version and the previous discrete model of the object and whether the difference is within an acceptable limit, the edited version being converted into a discrete model for the purpose of determining the difference with the previous discrete model.
In another aspect, the present invention also provides a system for interacting with simulated three-dimensional objects, the system including representations of three-dimensional objects identified from volume images, the representations including physical properties associated with each of the objects relating to at least visual and haptic properties of the objects, and at least one visual interface device and at least one haptic interface device enabling a user to interact with a simulated three-dimensional object, the interaction including the generation of signals by the system for transmission to the at least one visual interface device in accordance with the visual properties associated with the objects and the reception of signals from the at least one haptic interface device in accordance with user requests.
Preferably the system includes the provision of ancillary visual information to a user. It is also preferred that the system associates audible properties with identified objects and also includes an audio interface device and the generation of signals by the system for transmission to the audio interface device in accordance with the audio properties associated with the objects. It is also preferred that the system include the provision of ancillary audio information to a user.
In a particularly preferred embodiment, the haptic interface device is capable of receiving signals from the system corresponding to the associated haptic properties for simulated objects, the signals preferably conveying a haptic sensation to the user as a result of interacting with the objects.
The system may also include a voice recognition facility capable of receiving and interpreting spoken requests from a user, thus enabling the user to issue commands to the system without necessitating the use of one or both of the user's hands. This is particularly advantageous in simulating medical procedures, wherein a surgeon usually uses both hands and issues spoken requests to assistants.
It is also preferable that the system include the capability to co-ordinate the interaction between the various interface devices such that the correct registration of physical properties with the identified objects is maintained for the various system interfaces during interaction between the simulated objects and a human user.
In yet another aspect, the present invention provides a method of interacting with simulated three-dimensional objects, the method including the steps of the identification of three-dimensional objects from volume images; the development of a model to represent those objects; the association of physical properties with identified objects in the developed model, said properties including at least visual and haptic properties of the identified objects; and incorporating said model and associated physical properties into a system including at least one visual interface device and at least one haptic interface device, thus enabling the system to simulate visual and haptic properties of the identified objects, or any part thereof, to a human user interacting with the simulated three-dimensional objects, or any part thereof, said interaction including the generation of signals by the system for transmission to the at least one visual interface device in accordance with the physical properties associated with the objects and the reception of signals from the at least one haptic interface device in accordance with user requests.
Preferably, the method includes the step of editing the model of a simulated object in order to refine the model of the simulated object derived from volume images. In performing the editing process, it is preferred that the methqd includes the use of the haptic and visual interface devices by the user. The method may also make use of an audio interface device by a user.
The method preferably includes the derivation of a discrete model from volume images and the step of editing the model of a simulated object includes the application of an interpolative scheme to produce a continuous model from the discrete model derived from volume images. In a particularly preferred embodiment, the method includes an iterative process whereby an edited version of a model is compared with a previous discrete model of the object to determine the difference between the edited version and the previous discrete model of the object and whether the difference is within an acceptable limit, the edited version being converted into a discrete model for the purpose of determining the difference with the previous discrete model.
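A minimal sketch of this compare-until-acceptable iteration follows. The interpolate, edit, discretise and difference functions are placeholders standing in for the interpolative scheme, the user's editing, the discretisation step and the difference measure, none of which are specified here.

```python
def refine_until_acceptable(discrete_model, interpolate, edit, discretise,
                            difference, tolerance, max_rounds=20):
    """Iteratively edit a continuous model until the change between successive
    discrete models falls within an acceptable limit."""
    previous = discrete_model
    edited = None
    for _ in range(max_rounds):
        continuous = interpolate(previous)  # interpolative scheme: discrete -> continuous
        edited = edit(continuous)           # user edits the continuous model
        candidate = discretise(edited)      # convert back to discrete for comparison
        if difference(candidate, previous) <= tolerance:
            return edited                   # difference within the acceptable limit
        previous = candidate
    return edited
```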
Preferably, the model generated as part of the method is a potential field model representation of identified objects. The model may also include a physically oriented data structure for the representation of objects derived from volume images and the association of physical properties with that model.

In yet a further aspect, the present invention provides a system for interacting with simulated three-dimensional objects including a model of three-dimensional objects identified from volume images, said model including physical properties associated with each of the objects relating to at least visual and haptic properties thereof, and at least one visual and haptic interface device enabling a user to interact with a simulated three-dimensional object, said interaction including the generation of signals by the system for transmission to the at least one visual interface device in accordance with the visual properties associated with the objects and the reception of signals from the at least one haptic interface device in accordance with user requests.
Preferably the system includes audible properties associated with the identified objects and also includes an audio interface device and the generation of signals for transmission to the audio interface device in accordance with the audio properties associated with objects.
In a particularly preferred embodiment, the system includes a haptic interface device capable of receiving signals from the system corresponding to the associated haptic properties for a simulated object, or part thereof, the signals preferably conveying a haptic sensation to the user as a result of interacting with the object.
It is also preferred that the system include a voice recognition facility capable of receiving and interpreting spoken requests of a user, thus enabling the user to issue commands to the system without necessitating the use of one or both of the user's hands.
Preferably, the model incorporated in the system is a potential field model representation of identified objects. The model may also include a physically oriented data structure for the representation of objects derived from volume images and the association of physical properties with that model.

Description of the Preferred Embodiment
The present invention will now be described in relation to a preferred embodiment.
In particular, a preferred embodiment of the present invention relating to the specific field of medical imaging and subsequent extraction and editing of the vascular system of a patient will be described to highlight the best method of performing the invention known to the applicant. However, it is to be appreciated that the following description is not to limit the generality of the above description. The description of the preferred embodiment relates to an Incremental Object Extraction and Editing System (IOEES) and refers to the accompanying drawings in which:
Figure 1 illustrates a highly abstracted architectural representation of the IOEES;
Figure 2 shows a representative implementation of an IOEES;
Figure 3 represents an overview of the basic components involved during the implementation of the method of IOEES;
Figures 4a and 4b show diagrammatic representations of a suggested implementation of an IOEES where a user is seated at a workstation with various interface devices available to them for interacting with simulated three-dimensional objects;
Figure 5 illustrates a stack of two dimensional images as obtained from a scan of a patient;
Figures 6a and 6b illustrate a simple transformation function used in the development of a potential field model;
Figure 7 illustrates the creation process of a potential field;
Figures 8a, 8b, 8c, 8d and 8e show various shapes and configurations that may be used for probes;
Figure 9 illustrates a suggested implementation of an object manipulator for use in an IOEES;
Figures 10a, 10b and 10c illustrate a suggested implementation of a force feedback glove for use in an IOEES;
Figure 11 illustrates a possible scenario involving the use of two force feedback gloves, a probe and a spherical object;
Figure 12 illustrates a highly abstracted architectural representation of the Interaction Manager of the IOEES;
Figure 13 shows a hierarchical representation of the functions involved in the transmission and reception of commands in the IOEES;
Figure 14 shows the preferred process for the calculation of resistance based upon a potential field model for a force feedback controller;
Figures 15a and 15b illustrate the operation of the Explorer attempting to develop a path along a list of points;
Figure 16 illustrates the primary functions involved in the refinement of a continuous model of three-dimensional objects obtained from volume images and a first discrete model thereof; and
Figure 17 illustrates the steps involved in an example operation of the IOEES in extracting and editing a three-dimensional object from volume data.

Figure 1 illustrates an architectural representation of the IOEES depicting visual, audio and haptic interfaces as part of a multimedia workstation that is serving the user. The registration and coordination of these interfaces is controlled by the Interaction Manager. The Interaction Manager interacts with the Editor and Extractor modules to provide a continuous flow of haptic, visual and audio feedback to the user during the object extraction and editing processes. The main source of data used in this system is sets of volume images. However, 2D and 3D atlas or other geometric or physical data could be used to enhance the extraction and editing processes.
Figure 2 shows an implementation of IOEES where the user wears a headset and holds a pair of haptic tools, in this instance a force feedback probe and an object manipulator. The computer system displays a rendered volume in plain or stereo 3D.
Figure 3 is an overview of the method used in this IOEES. The system polls data inputs from both haptic and audio means. These inputs will generally represent the volume object being edited or a selected region of an object being extracted. The inputs may initiate responses from the computer to the user through the visual, haptic or audio means.
Figures 4a and 4b show diagrammatic representations of a suggested implementation of an IOEES workstation. The workstation in an IOEES includes a computer system that has sufficient graphic capabilities to render a set of volumetric images in plain or stereo 3D.
The workstation in Figure 4a also includes a 3D-interaction tool with force feedback, referred to as a force feedback probe, that enables a user to pick and move a point or an object in the 3D space of the set of volumetric images displayed on the computer monitor. Figure 4a also includes a hand held device, referred to as an object manipulator, that enables a user to perform zooming, 3D rotation and translation of the volumetric object in the virtual space. The workstation also includes a speaker and a microphone, possibly in the form of a headset, to provide audio interaction between the user and the computer.
An alternative workstation configuration is depicted in Figure 4b where the workstation replaces the force feedback probe of Figure 4a with a pair of instrumented gloves. Figure 4b also depicts a workstation that includes a holoscope in place of the stereo emitter and glasses, and the headset with a more conventional microphone and speaker arrangement.
Volume Images and Potential Field Model
Volume images are generally provided in the form of a stack of two-dimensional images in the axial direction of an object that is to be considered. A diagrammatic representation is shown in Figure 5 where a stack of images are generated correlating with two dimensional slices taken generally along the longitudinal axis of a patient.
Volumetric images serve as the source of input to the method of the present invention. Virtually any scanner can produce suitable axial images, or can at least produce images that can easily be converted to axial images.
For example, rotational CT scanners capture patient data in the form of projection images. From these images a method known as Back Projection technique can be used to construct volume images.
Volume rendering techniques such as ray casting and projection techniques have traditionally been used in the visualization of volume images. Advances in computer hardware and software have greatly improved the speed and accuracy of such visualization. In a preferred embodiment of the invention, a potential field model is used as the data source for force computation and audio generation. The potential field is defined as a volume of force vectors.
For example, the input to the process for creating a potential field model could be the output of two-dimensional images from a CTA scanner. In this instance, the data will contain both vascular and bone structures and to differentiate between these two types of structure, the process includes an analysis of the intensity of the pixels forming the two-dimensional images. This enables the derivation of outlines in the two-dimensional images corresponding to the vascular and bone structures. When considering a volume of images, the different structures are determined by an analysis of the intensity of each voxel forming the volume image. For each plane of the volume images, the method preferably first derives force vectors for that plane, the force vectors on each plane being independent of the other planes in the volume. A collection of planes of force vectors contributes to the establishment of a potential field. The construction of a potential field from volume data is illustrated in Figures 6a and 6b.
For illustration purposes, it is appropriate to consider a plane of black and white pixels having a dimension 8 x 8 as detailed in Figure 6a. A relatively straightforward transformation function may be used to associate a black pixel with a value of 1, a white pixel that is not bordering any black pixel with a value of 0, a white pixel that is bordering a black pixel with a vector of magnitude 0.5 and direction given in Figure 6b, and a value of 0.5 in an ambiguous case.
Of course, it is possible to have more complex schemes that involve the physical properties of the objects in the volume and operate on pixels with a wider range of colours. The force vectors resulting from this step of the method are referred to as a potential field, since the force vectors define the tendency of moving from values of low to high potential, similar to an electrostatic potential field.
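Under the simple scheme above, the per-plane transformation might be sketched as follows. Since the direction convention of Figure 6b is not reproduced in the text, the vector is assumed here to point toward the adjacent black pixel, and the magnitude-only value 0.5 is kept for the ambiguous case of several black neighbours.

```python
def plane_potential(plane):
    """Sketch of the transformation of Figures 6a/6b: map each pixel of a binary
    plane (1 = black, 0 = white) to a (potential, force-vector) pair."""
    h, w = len(plane), len(plane[0])
    out = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if plane[y][x] == 1:
                out[y][x] = (1.0, (0.0, 0.0))            # black pixel: value 1
                continue
            # 4-connected black neighbours of this white pixel
            dirs = [(dx, dy)
                    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if 0 <= x + dx < w and 0 <= y + dy < h
                    and plane[y + dy][x + dx] == 1]
            if not dirs:
                out[y][x] = (0.0, (0.0, 0.0))            # interior white pixel: value 0
            elif len(dirs) == 1:
                dx, dy = dirs[0]
                out[y][x] = (0.5, (0.5 * dx, 0.5 * dy))  # border pixel: vector, magnitude 0.5
            else:
                out[y][x] = (0.5, (0.0, 0.0))            # ambiguous case: value 0.5
    return out
```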
Figure 7 provides a more detailed illustration of a preferred process of creating a potential field model. The input to the creation process is multi-modality volume data, which refers to data that consists of one or more groups of volume data with different imaging modalities. This is likely as a patient may be subjected to MRA scanning in addition to CT scanning.
In order to use more than one set of volume data an added process of synthesising the potential value of each set of volume data is required to derive a single potential field. The library identified as "Property of Imaging Modalities" used in this synthesis process stores parameters for equalising different modalities.
A voxel is the smallest unit cube of a volume and an inter-voxel similarity comparison is used to determine whether a voxel belongs to a particular object. Local edge detection determines the boundary area of the regions and the classification process labels the object regions.
For each structure or object embedded in the volume data, the regions containing the object are labeled based upon information including the image intensity, the intensity gradient between pixels and information from the material properties library. The material properties library includes information regarding the mapping function of physical properties based upon voxel intensities. As a result of the execution of this part of the method, each structure or object embedded in the volume data is labeled as being of a particular type, such as bone or soft tissue. Another process, referred to as "Model Guidance", is optional and makes use of a reference model or atlas to guide the three-dimensional region of interest and hence more accurately label object regions. After an object has been identified and labeled, the physical properties of the regions are assigned. In the preferred process, the only physical properties considered are Young's modulus and modulus of rigidity. The object-shortest-distance is defined as the shortest distance between a voxel and the regions defining the object. It is determined for each voxel in the volume, and the potential value is then determined based upon a polynomial combination of intensity, physical properties and object-shortest-distance.
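The object-shortest-distance just defined can be sketched as a brute-force search over the object's voxels; a practical implementation would use a Euclidean distance transform instead of this O(N x M) loop.

```python
import math

def object_shortest_distance(shape, object_voxels):
    """For each voxel of a volume of the given (nx, ny, nz) shape, compute the
    smallest Euclidean distance to any voxel labelled as part of the object.
    object_voxels is a non-empty list of (x, y, z) coordinates."""
    nx, ny, nz = shape
    dist = {}
    for x in range(nx):
        for y in range(ny):
            for z in range(nz):
                dist[(x, y, z)] = min(
                    math.sqrt((x - ox) ** 2 + (y - oy) ** 2 + (z - oz) ** 2)
                    for ox, oy, oz in object_voxels)
    return dist
```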
In the preferred embodiment, the relationship determining the potential value for each voxel is as follows:
F(x, y, z, e, r) = √(dx² + dy² + dz²) + f(e) + g(r)
Where F is a function in the three-dimensional space and dᵢ (i = x, y, z) is the distance in the i-th direction between the point (x, y, z) and its given reference point A of the structure. The point A in many cases can be considered as the nearest point on a specified structure. Hence dᵢ can be expressed as
dx = Px - Ax, dy = Py - Ay, dz = Pz - Az,
Function f(e) represents the mechanical properties of tissue point P. In general, the structure is nonlinear elastic or visco-elastic. But in the preferred embodiment, for simplicity, a linear model is employed to express the force- displacement or stress-strain relationship as: σ = Dε
Where σ and ε are stress and strain vectors, and D is a constitutive matrix. As linearity is required, the matrix D is constant for a given position. The equation is occasionally referred to as Hooke's generalized law. In a homogeneous case, only two elastic parameters, Young's modulus E and Poisson's ratio ν, are present. The mass density is also a component part in this function.
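For a homogeneous, isotropic material, the constant matrix D can be assembled from E and ν. The specification does not give D explicitly, so the textbook isotropic form below is an assumption.

```python
def isotropic_constitutive_matrix(E, nu):
    """Standard 6x6 isotropic linear-elastic constitutive matrix D for
    sigma = D * epsilon, built from Young's modulus E and Poisson's ratio nu."""
    c = E / ((1.0 + nu) * (1.0 - 2.0 * nu))
    d = [[0.0] * 6 for _ in range(6)]
    for i in range(3):
        for j in range(3):
            # normal-stress block: (1 - nu) on the diagonal, nu off-diagonal
            d[i][j] = c * (nu if i != j else 1.0 - nu)
        # shear terms: c * (1 - 2 nu) / 2, which equals the shear modulus E / (2 (1 + nu))
        d[i + 3][i + 3] = c * (1.0 - 2.0 * nu) / 2.0
    return d
```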
Function g represents the image intensity or the gray value of Point P. It is also related to the scanning mechanism, the scanning technique and other associated geometric information. Function g is used for image processing but in some cases, its value may also relate to mechanical properties, in particular, the mass density. In its simplest form, Function g equates to the image intensity.
The other features regarding the defined potential field can also be expressed as: d² = dx² + dy² + dz²,
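The scalar potential function itself is small enough to sketch directly. The mechanical term f(e) and intensity term g(r) are passed in as already-evaluated numbers, since their exact forms are application-specific.

```python
import math

def potential_value(p, a, f_e, g_r):
    """F(x, y, z, e, r) = sqrt(dx^2 + dy^2 + dz^2) + f(e) + g(r): the Euclidean
    distance from point p = (x, y, z) to its reference point a on the structure,
    plus a mechanical-property term and an image-intensity term."""
    dx, dy, dz = (p[i] - a[i] for i in range(3))
    return math.sqrt(dx * dx + dy * dy + dz * dz) + f_e + g_r
```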
For computation, the proposed potential field can be quantised with respect to a grid space. For real-time calculation of object interaction, the potential components are pre-calculated and stored in a defined data structure. Hence, the computation of a given point P(x,y,z) will be implemented using a linear-cubic interpolation such as:
c = Σ Nᵢcᵢ
Where cᵢ (i = 1, 2, ..., 8) is the component value at the i-th node of the cube and Nᵢ (i = 1, 2, ..., 8) is the i-th shape function, which can be expressed as:

Nᵢ = (1 + ξξᵢ)(1 + ηηᵢ)(1 + ςςᵢ) / 8
Where ξ,η and ς are coordinates in a parametric space.
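This shape-function interpolation can be sketched as follows, with the eight nodal parametric coordinates (ξᵢ, ηᵢ, ςᵢ) taken from {-1, +1}³. The node ordering is an assumption, since none is given in the text.

```python
def trilinear_interpolate(c, xi, eta, zeta):
    """c = sum_i N_i c_i over the 8 nodes of a cube, with shape functions
    N_i = (1 + xi*xi_i)(1 + eta*eta_i)(1 + zeta*zeta_i) / 8.  c lists the 8
    nodal component values in the order generated by `nodes` below."""
    nodes = [(sx, sy, sz) for sz in (-1, 1) for sy in (-1, 1) for sx in (-1, 1)]
    value = 0.0
    for ci, (sx, sy, sz) in zip(c, nodes):
        n_i = (1 + xi * sx) * (1 + eta * sy) * (1 + zeta * sz) / 8.0
        value += n_i * ci
    return value
```

At any interior point the shape functions sum to 1, so a constant nodal field interpolates to itself, and each N_i equals 1 at its own node.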
After the potential values of all objects are obtained independently, a synthesis procedure is invoked to combine the results for each type of volume data considered. The potential field model provides a basis for the establishment of force relationships such that realistic forces providing resistance to the passage of the tool can be fed back to the user by way of a haptic interface. The force vectors, which are stored as part of the potential field model, describe the direction and magnitude of force that should be conveyed as force feedback to a user for various points in the model. Since the potential field is established based on object properties and their spatial relationships, which is related to the "minimum distance to an object", such force feedback can provide realistic sensation with respect to an object's shape and "feel".
The same volume of force vectors can also be used to associate and generate audio signals based on some form of relationship between the object and the device or tool interacting with the object. For example, as the tool approaches an object, an audio signal could be generated to signal a potential collision. The sound upon collision and possible damage to the object could be readily adapted to the potential field. In addition, suitable audio messages could also be activated to alert the user to particular situations. The association of sounds with various points in the model provides the user with a collaborative sensation of the object at a particular point. The use of a potential field model in the preferred embodiment provides additional information or dimension with respect to revealing the details of an anatomical structure. However, other information such as distances between objects and other parameters may also be presented on the visual display.
The development of a potential field model for a volume image, and the derivation and association of visual, audible and haptic sensations for various points in the model as described above for the preferred embodiment, provides for the correct registration of the various types of sensory feedback as the user interacts with the objects identified from the volume image.

Haptic and Audio Interfaces
Voice Command and Control
Audio interfacing between the user and the system enables the implementation of a voice command and control application thereby allowing the user to more closely concentrate on the task at hand. In many medical procedures, audible commands are the primary method of communication between a surgeon and his assistants. Effectively, in the present invention, the computer system executing the model may become the assistant. The incorporation of a voice recognition facility would also enable a user to issue commands to the computer.
In reality, an assistant or patient in a medical procedure might respond to a surgeon's spoken communication. The incorporation of audio interfacing between the user and the computer provides for the simulation of this aspect of a medical procedure.
For example, the user may be navigating to a branch point which is of particular importance. In this instance, the system may prompt the user by way of synthesized speech warning that a special condition is about to occur. An audio interface is particularly useful for reducing the need for visual cues to a user which would have the effect of crowding the visual display device which will be providing the user with simulated visual representations of the various objects.
Similarly, a surgeon will generally speak to his assistant for assistance or information during a medical procedure. To accommodate this aspect, the system may include a microphone, the audio input preferably being digitised and matched with actions corresponding to the spoken commands. The voice commands could conceivably relate to requests for various tools. It is expected that tasks such as the fine adjustment of object manipulation would be performed by, or with, assistants since the user's hands are generally occupied.

Haptic Interfaces
The term "haptic interface" is used in this specification to refer to all forms of interface between a user and a system that involve the sense of touch. In the present invention, it is preferable to use a device referred to as an instrumented glove to provide a user with tactile responses. Other devices that may be used include force feedback probes and three-dimensional object manipulators. Whilst gloves have been used previously in virtual reality systems, the usual form of an instrumented glove is an output device only. However, in the present invention, it is preferable to extend the function of the glove to enable it to receive input signals, thereby providing force feedback functionality.
Force Feedback Probe
Preferably, the system also includes a force feedback probe. Such a probe should have six degrees of freedom and provide a user with the ability to select a planar image and a point on the image with force feedback to restrict the hand movement within the area of interest. Additionally, it would be preferred that a force feedback probe include user activated switches or buttons to allow users to implement combinations of actions.
Figure 8a details a typical force feedback probe that is considered suitable for the present invention and Figures 8b to 8e represent a range of alternative shapes and configurations for force feedback probes.
With an audio interface to the system capable of recognising spoken commands, switching between different visual representations of a force feedback probe could be effected in a manner similar to the practice in a surgical procedure when a surgeon instructs an assistant to provide him with a specific tool.
Object Manipulator
Figure 9 shows a preferred implementation of an object manipulator. Preferably, an object manipulator is provided which can also make use of force feedback signals generated by the system. To manipulate a virtual object, the user may depress a button or activate a switch to move the handle of the object manipulator. Preferably, the object manipulator is also provided with a scroll wheel at the side of the handle to implement scaling functions. It is expected that the object manipulator would be capable of receiving spoken commands from a user. It is also expected that the user would be able to fine-tune object manipulation by spoken commands.
Force Feedback Glove
An instrumented glove is considered to be a relatively natural way of providing a user with an input device comprising six degrees of freedom in a simulated or virtual environment. Figures 10a to 10c show various views of the preferred configuration for a force feedback glove.
In the present invention, the glove preferably has both an input and an output capability. A magnetic tracker may be attached to the centre of the palm of the glove, which may be considered by the system to be the rotational centre of the system's representation of the user's hand. Preferably, the palm of the glove includes a switch or button, the activation of which is effected by closing the fingers of the user's hand.
Additionally, it is preferable to include on the back of the glove relatively short, lightweight cables interconnected with movable joints. The force feedback controller pulls or releases these cables to effect a sensation upon the wearer of the glove in accordance with the constraint required in light of the user's interaction with simulated objects.
When using the glove, it is expected that all translation and rotation operations would be performed by the user's shoulder, elbow and wrist. Other than pressing the button on the palm, it is also expected that the smaller finger joints and muscle groups on the fingers would not be utilised, as users have very little control over movement of the small finger joints. Since the glove requires rotation to be made with the wrist, the elbow and the shoulder, its range of rotation is limited, and pressing the button on the palm of the glove is a method of indicating to the system that a limit has been reached. Whenever a limit is reached, the user needs to disengage the manipulated object. This may be effected by releasing the button under the fingers, which would restore the hand to a more comfortable posture, and then recommencing the manipulation (by re-closing the fingers).
The force feedback controller manages the force feedback cables to enable the user to feel the gesture during manipulation. Cables are joined using mechanical joints that have a micro motor attached. When the user is holding a solid object or a tool, the composite joints restrain the hand from moving beyond the shape of the object.
Preferably, the system includes a pair of force feedback gloves thereby enabling the use of both hands. To zoom in and out of the object, the user can bring the virtual object nearer and farther with the same glove.
The various tools that are represented in Figures 8a to 8e may now feel different to the user. As was the case for the force feedback probe, the interchange between various tools can preferably be effected by way of spoken commands.
Figure 11 illustrates an example of interaction between haptic, visual and audio interfaces. Figure 11 illustrates the visual representation for a user wearing two instrumented gloves with one hand grasping a tool and the other hand grasping a spherical object. In the example of Figure 11, the tool is brought into close proximity of the spherical object and a "tick" sound is played to the user to represent the tool and the object making contact in the simulated environment. The inclusion of audio capabilities in the system enhances the sense of reality that would otherwise be experienced by use of the haptic and visual interfaces alone.

Interaction Manager
Traditional approaches to performing tasks with virtual objects in three- dimensional space are generally considered cumbersome.
In particular, due to a lack of adequate depth perception, users generally cannot confirm whether a desired target has been tagged correctly unless they alter their viewpoint or rotate the entire virtual space.
In addition, users generally find it difficult to select a point on a very small target; natural hand tremor contributes to this problem.
Preferably, a selection of devices, or tools, of different shapes and configurations is made available to users with the expectation that choices as to the types of devices will improve the ability of users to perform tasks in a virtual environment.
As human users naturally combine different modes to achieve physical and mental tasks, it is preferable that the system of the present invention incorporate the facilities to support those modes of interaction between a user and simulated, or virtual, objects.
It is preferable that the system provide the user with the capability to produce or devise tools with a shape or configuration defined by the user.
Architecture
In the preferred embodiment, the Interaction Manager includes two sub-layers as detailed in Figure 12. In this Figure, the largest outer rectangle represents the Interaction Manager, where the upper sub-layer represents the Advanced Interaction Manager (AIM) and the lower sub-layer represents the Elementary Interaction Manager (EIM).
The EIM provides general services and various kinds of information to support the AIM in performing more complex functions. The AIM provides outer applications with a uniform interface, including a Command & Message Bus (CMBus). In a preferred embodiment, user-developed applications or additional controllers may be installed; they listen for and post commands or messages via the bus, working in an event-driven mode.
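The event-driven installation over the CMBus might be sketched as follows. This Python fragment is an illustration only; the class names, the "INIT" message and the configuration list are assumptions, not part of the specification.

```python
# Hypothetical sketch of controllers attached to a command/message bus,
# initialised by a broadcast, as described for the CMBus above.

class Controller:
    def __init__(self, name):
        self.name, self.ready = name, False

    def on_message(self, msg):
        if msg == "INIT":
            self.ready = True   # the controller initialises itself

class CMBus:
    def __init__(self):
        self.listeners = []

    def attach(self, controller):
        self.listeners.append(controller)

    def broadcast(self, msg):
        for c in self.listeners:
            c.on_message(msg)

bus = CMBus()
registered = ["PathPlanner", "UserTool"]   # names read from a configuration (assumed)
for name in registered:
    bus.attach(Controller(name))           # availability checking omitted for brevity
bus.broadcast("INIT")
print(all(c.ready for c in bus.listeners))  # → True
```

After the broadcast, each attached controller is ready to serve requests posted on the bus.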
Elementary Interaction Manager (EIM)
The EIM performs relatively basic tasks. It receives user input whilst also providing various devices with signals, thus conveying force feedback information to users.
In the preferred embodiment, existing libraries such as OpenGL and DirectX are used as the graphics libraries.
Several modules in the architecture function as multimedia engines. The Rendering Engine, Force Maker and Sound & Voice Synthesiser modules provide output signals to the user whereas the Motion Analyser, Gesture Analyser and Voice Command Recogniser receive information via respective input media and abstract them into a form that other components of the system can readily recognise and use.
In the preferred embodiment the Rendering Engine is able to accept primitive commands for rendering polygonal model based virtual objects as well as volume texture based virtual objects. Upon accepting a primitive command, the Rendering Engine will decompose it into several smaller commands that can be performed by invoking subroutines in the Graphics Library. The Force Maker and Sound & Voice Synthesiser modules operate on a similar basis.
In the preferred embodiment, the Gesture Analyser receives spatial data associated with several fingers of one hand, and determines corresponding gestures from them. No semantic information related to gesture is determined by the Gesture Analyser. The Voice Command Recogniser operates on a similar basis to the Gesture Analyser. However, in the preferred embodiment, the Motion Analyser operates differently from the Gesture Analyser and Voice Command Recogniser: it processes raw spatial data from reach-in devices, such as instrumented gloves, and transforms that data into representative movements in virtual space. In addition, the Motion Analyser filters the input data and analyses motion direction and speed.
Advanced Interaction Manager (AIM)

In the preferred embodiment the AIM comprises three components: a Command & Message Handler (CMH), a set of Controllers and a Default Virtual Environment Conductor.
Preferably, a message is a package including three fields, namely a data field, containing spatial data originated from the Motion Analyser, a sender field and a recipient field. It is also preferred that the command is also a package including a code field related to a service to be requested.
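For illustration only, the message and command packages described above could be modelled as follows; the sender and recipient fields on the command, and all field values, are assumptions beyond what the specification states.

```python
# Hypothetical sketch of the two package types carried on the CMBus:
# a message (data, sender, recipient) and a command (service code).

from dataclasses import dataclass

@dataclass
class Message:
    data: tuple        # spatial data originating from the Motion Analyser
    sender: str        # sender field
    recipient: str     # recipient field

@dataclass
class Command:
    code: int          # code field identifying the requested service
    sender: str        # assumed field, not stated in the specification
    recipient: str     # assumed field, not stated in the specification

msg = Message(data=(0.1, 0.2, 0.3), sender="MotionAnalyser", recipient="AIM")
cmd = Command(code=42, sender="VoiceCommandRecogniser", recipient="TargetFinder")
print(msg.recipient, cmd.code)  # → AIM 42
```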
The Command & Message Handler can receive and analyse data supplied from the EIM and repackage them into messages or commands by using the Default Predefined-Command Database or the User Defined Command Database. The data from the Motion Analyser are converted into messages, while those from the Gesture Analyser and Voice Command Recogniser are converted into commands.
Preferably, the Default Predefined-Command Database contains records defining a relation between gestures, or words, and commands. Moreover, an action (a gesture sequence) or a sentence can also be matched with a command. In the preferred embodiment the User Defined Command Database contains user-defined commands representing the user's favourite gestures or expressions. Preferably, user-defined relations have a higher priority than predefined relations. The prioritisation of relations acts to resolve conflict when one gesture, or one expression, is related to two commands defined in the Default Predefined-Command Database and the User Defined Command Database respectively.
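The priority rule above can be illustrated with a minimal sketch, assuming dictionary-backed databases; the gesture names and command identifiers are invented for the example.

```python
# Hypothetical sketch of conflict resolution between the two command
# databases: user-defined relations take priority over predefined ones.

DEFAULT_PREDEFINED = {"grasp": "CMD_PICK", "open": "CMD_OPEN_FILE"}
USER_DEFINED = {"grasp": "CMD_GRAB_TOOL"}   # the user's favourite gesture mapping

def resolve(gesture):
    """Look up a gesture, giving user-defined relations higher priority."""
    if gesture in USER_DEFINED:
        return USER_DEFINED[gesture]
    return DEFAULT_PREDEFINED.get(gesture)

print(resolve("grasp"))  # → CMD_GRAB_TOOL (conflict resolved in the user's favour)
print(resolve("open"))   # → CMD_OPEN_FILE
```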
Preferably, a default application is provided, such as the Default Virtual Environment Conductor, which constructs and maintains a virtual three-dimensional environment in which all other applications work. It also provides several basic and necessary routines, such as managing three-dimensional control panels, opening data files of a default type, or similar utilities. In particular, it is able to supply several virtual tools, such as a metal-detector, 3D-point-picker, 3D-scissors or various kinds of probes with different shapes and functions, for a user to study virtual objects. Furthermore, it should have the privilege to control all controllers and applications.
User-developed application and additional controller
The architecture of the preferred embodiment conveniently provides for a user to characterise a virtual world by attaching their own applications or additional controllers to the CMBus. A user-developed application or controller has the same status as the default applications or controllers.
In this respect, the term "application" refers to a program that is to perform specific functions, whilst the term "controller" refers to a program that performs general functions.
For clarity, the following example illustrates how a user-developed virtual tool operates inside the Default Virtual Environment Conductor. Firstly, the user should determine what services the tool can supply and implement those services in an event-driven mode. Then, some specific code should be added to a system configuration file to register the controller. During AIM initialisation, the Default Virtual Environment Conductor will open the system file, check whether the registered controllers and applications are available, and then broadcast a message to allow them to initialise themselves. After initialisation, the new controller, like the others, will be ready to serve any request.

Functionality
In the following sections, functions supported by Interaction Manager for the performance of commonly required tasks are described.
Reach A Position
Selecting a position of interest on an object can be a difficult task. Preferably, the Interaction Manager provides the following facilities to assist the user with the performance of this task.
In the preferred embodiment, the Motion Analyser includes a filter to reduce the component of the input signal due to human hand vibration. The filter may be calibrated to determine intrinsic parameters by assessing the hand vibration of a user. The filter is effectively a low-pass filter for signals in the three dimensions.
However, in the present invention, the determination of the threshold frequency for the filter is important. In the preferred embodiment an intuitive procedure is implemented to determine the threshold frequency.
Firstly, assuming that a position calibration for the reach-in device has been conducted, the user is instructed to move the device along a displayed curve, thereby forming a trail representing the user's attempt to follow the curve. The actual trail of the device is displayed and recorded. The average difference between the curve and the trail is computed and used to determine a first threshold frequency for the low-pass filter. The filtered trail is then compared with the curve and the average difference is determined. If the difference is not within an acceptable limit, the process is repeated to determine a new threshold frequency, and continues until an acceptable average difference is reached.
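The iterative threshold-determination loop can be sketched as follows. This is a simplified stand-in: a real implementation would low-pass filter the recorded three-dimensional trail against a threshold frequency, whereas here one-dimensional exponential smoothing and its coefficient stand in for the filter and its cut-off.

```python
# Hypothetical sketch of the calibration loop: tighten the filter until the
# average difference between the displayed curve and the filtered trail is
# within an acceptable limit. All numeric values are illustrative.

def average_difference(curve, trail):
    return sum(abs(c - t) for c, t in zip(curve, trail)) / len(curve)

def low_pass(trail, alpha):
    """Exponential smoothing as a stand-in for the low-pass filter."""
    out, prev = [], trail[0]
    for x in trail:
        prev = alpha * x + (1 - alpha) * prev
        out.append(prev)
    return out

def calibrate(curve, trail, limit=0.05, alpha=0.9, max_iter=20):
    """Lower the cut-off (here: alpha) until the filtered trail is acceptable."""
    for _ in range(max_iter):
        if average_difference(curve, low_pass(trail, alpha)) <= limit:
            return alpha
        alpha *= 0.8          # tighten the filter and try again
    return alpha

curve = [0.0] * 50                            # the displayed curve (flat, for brevity)
trail = [0.1 * (-1) ** i for i in range(50)]  # hand tremor oscillating about the curve
print(calibrate(curve, trail) < 0.9)          # → True (a tighter filter was needed)
```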
An optional component, the Target Finder, is also provided for any picking tool; it is designed as a knowledge-based agent that can accept voice commands. A set of commands associated with the Target Finder is also defined so that the Command & Message Handler can co-operate by correctly dispatching those commands recognised by the Voice Command Recogniser to it via the CMBus.
Figure 13 shows a hierarchical representation of the functions involved in the transmission and reception of commands in the IOEES.
For simple commands, no analysis or deduction is generally needed.
However, for more complex commands, reference is required to a domain knowledge database and a three-dimensional model of the human body.
A three-dimensional snap function provides an efficient object picking technique in three-dimensional modelling. Nevertheless, when a three-dimensional reach-in device is used, device calibration becomes a relatively difficult problem. Two methods are proposed to overcome this difficulty.
The first method involves adding an offset vector to positions conveyed from the reach-in device. For this method it is assumed that device calibration has been performed successfully before any snap operation (i.e. a transformation T_cal acquired from the calibration will transform positions from device space into virtual space). The device space is defined as the space in which the reach-in device moves, and the virtual space is defined as the space in which the virtual environment is imagined to be. If there is no snap operation, the relation between the two spaces is:

P_vir = T_cal · P_dev

where P_vir is in virtual space and P_dev is in device space. When the snap operation is available, the relation is altered to P_vir = T_cal · P_dev + V, where V is the so-called offset vector.
The following description explains the method in more detail. For the purpose of the description, assume that an offset vector V_i has been well defined in the ith snap operation. During the (i+1)th snap operation, when the reach-in device is moved to a position P_dev, the virtual stylus is displayed as moving to the corresponding position P_vir = T_cal · P_dev + V_i. Next, a position P_tag near the position P_vir is tagged by various snap rules, such as locating the nearest grid corner or finding the nearest point on a certain edge or surface. After that, a vector V' = P_tag − P_vir is added to V_i. The result is the desired new offset vector V_(i+1) = V' + V_i. V_(i+1) then substitutes for V_i in the transformation, as P_vir = T_cal · P_dev + V_(i+1), until the next snap operation. On the display side, the virtual stylus is shown jumping onto the position P_tag.
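The offset-vector update of the first method can be sketched as follows, assuming for brevity that T_cal is the identity and that the snap rule locates the nearest corner of a unit grid; both are stand-ins, not prescribed by the specification.

```python
# Hypothetical sketch of one snap operation of the first method:
# P_vir = P_dev + V_i (T_cal taken as identity), tag a nearby position,
# then accumulate the new offset V_(i+1) = (P_tag - P_vir) + V_i.

def snap_rule(p):
    """Assumed snap rule: nearest corner of a unit grid."""
    return tuple(round(c) for c in p)

def snap_step(p_dev, v_offset):
    """One snap operation: returns the tagged position and the new offset."""
    p_vir = tuple(d + v for d, v in zip(p_dev, v_offset))   # stylus position
    p_tag = snap_rule(p_vir)                                # tag a nearby position
    v_new = tuple(t - pv + v for t, pv, v in zip(p_tag, p_vir, v_offset))
    return p_tag, v_new

p_tag, v = snap_step((0.9, 2.2, 3.1), (0.0, 0.0, 0.0))
print(p_tag)  # → (1, 2, 3): the stylus jumps onto the snapped position
print(v)      # the accumulated offset, applied until the next snap operation
```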
The second method can be used where a high level of realism is required. When a reach-in device is moved to a certain place, the user will observe the virtual stylus moving to the same place; the spatial information for the device from the haptic sense and from vision coincide. Therefore, the former method, which moves the virtual stylus without moving the reach-in device, cannot be exploited here. It is helpful, in implementing the operation in such a situation, that the virtual stylus' pinpoint need not exactly point to the positions that may be selected. When the tagged position is determined, as in the first method, the virtual pinpoint is not moved onto it. Instead, the tagged point will blink or highlight to show that it has been selected. This means that the pinpoint does not coincide with the actual selected point, and there is a small distance between them. Meanwhile, a force is exerted on the device, designed to encourage the user to move the device so as to eliminate the distance. It is much like a force of attraction from the tagged position.
Force Computation Based on Potential Field
The Default Force Feedback Controller uses a model based on a potential field to generate resistance. It can be described as follows: if the scalar value of each voxel is considered as a value of potential, the volume data set is equivalent to a discrete potential field.
Furthermore, if the potential field is regarded as an electric field, a force should be exerted on any object carrying an electric charge. The intensity of the force is in proportion to the gradient of the potential field at the position where the object is located. The force direction, along the gradient direction or its opposite, depends on which kind of electric charge (positive or negative) the object possesses. In the present situation, the potential value is also a factor contributing to the force. More generally, the force can be defined as a function of the potential gradient and the potential value. Thus, by computing the gradient vector at a position in the volume data, a force vector can be completely determined and a force can be exerted on the haptic reach-in device. Such a force can be exploited to guide the user to explore and study three-dimensional virtual objects.
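A force vector derived from a discrete potential field might be computed as sketched below; the central-difference gradient and the unit proportionality constant are assumptions consistent with, but not prescribed by, the description above.

```python
# Hypothetical sketch: force on a charged probe at voxel (x, y, z) of a
# discrete potential field, proportional to the local potential gradient.

def gradient(field, x, y, z):
    """Central-difference gradient of a 3D scalar field at voxel (x, y, z)."""
    gx = (field[x + 1][y][z] - field[x - 1][y][z]) / 2.0
    gy = (field[x][y + 1][z] - field[x][y - 1][z]) / 2.0
    gz = (field[x][y][z + 1] - field[x][y][z - 1]) / 2.0
    return (gx, gy, gz)

def force(field, x, y, z, charge=1.0):
    """Force proportional to the gradient; the sign comes from the 'charge'."""
    g = gradient(field, x, y, z)
    return tuple(0.0 - charge * c for c in g)

# A tiny field whose potential rises linearly along x: potential = x.
n = 5
field = [[[x for z in range(n)] for y in range(n)] for x in range(n)]
print(force(field, 2, 2, 2))  # → (-1.0, 0.0, 0.0): pushed down the gradient
```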
With reference to Figure 14, in the preferred embodiment, spatial information about a haptic device is measured by procedures in the Haptics Library and is then passed to the Motion Analyser and the Command & Message Handler. The spatial information is communicated to the Default Force Feedback Controller by way of the Command & Message Bus, together with other control information.
With respect to the "Resistance Computation based on Potential Field", a Finite Element Method (FEM) mesh model of the interacting tool is retrieved from an FEM mesh library. The FEM mesh model represents the shape and physical properties of the virtual tool being selected. The gradient vector of the potential field on each vertex in the model is determined from the following relationship: i = V(Kernel * E potential ) _ p
In this relationship, "Kernel" refers to a three-dimensional filter function. A convolution operation between the Kernel function and the potential field E_potential is performed as an anti-aliasing measure in order to obtain a smooth result. Theoretically, the entire resistance f_pf is an integral of df_pf, which acts upon an infinitesimal element of the virtual object. Since the Finite Element Method is employed, the above integration is simplified to a sum of contributions from each finite element. To reduce the computational requirement, only the surface mesh is considered in the preferred embodiment. Hence, each finite element geometrically corresponds to a mesh element. The element is described by a set of vertices and a set of interpolation functions. The resistance encountered by a single element k is determined by the following relationship:

ΔF_k = ∬ a_k(u,v) [n̂_k(u,v) · g_k(u,v)] n̂_k(u,v) du dv
In this relationship, n̂_k(u,v) is a unit normal vector, (u,v) are local parameters on the mesh element in consideration, and a_k(u,v) is a physical property. The element k can be described by p_k(u,v) = Σ_j p_kj ψ_j(u,v), where ψ_j(u,v) is the jth interpolation function and p_kj is the jth vertex of the element k. In practice, n̂_k(u,v) may be considered to be a constant vector, approximately n̂_k; a_k(u,v) may also be considered constant, a_k; and g_k(u,v) = Σ_j g_kj φ_j(u,v), where φ_j(u,v) is the jth interpolation function. Therefore:

ΔF_k ≈ a_k [ Σ_j (n̂_k · g_kj) ∬ φ_j(u,v) du dv ] n̂_k
Furthermore, the total resistance force based on the potential field may be determined as:

f_pf = Σ_k ΔF_k
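The summation of per-element contributions can be sketched as follows, using the constant-normal, constant-property approximation; lumping each element's integral into an area factor, and all numeric values, are illustrative assumptions.

```python
# Hypothetical sketch: ΔF_k ≈ a_k (n̂_k · ḡ_k) n̂_k A_k per surface element,
# accumulated into f_pf = Σ_k ΔF_k. ḡ_k is a mean gradient over the element
# and A_k a lumped element area (both stand-ins for the element integrals).

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def element_force(a_k, n_k, g_k, area_k):
    """Constant-normal, constant-property approximation of one element."""
    scale = a_k * dot(n_k, g_k) * area_k
    return tuple(scale * c for c in n_k)

def total_force(elements):
    """f_pf = sum over surface mesh elements of their ΔF_k."""
    total = (0.0, 0.0, 0.0)
    for a_k, n_k, g_k, area_k in elements:
        total = tuple(t + f for t, f in
                      zip(total, element_force(a_k, n_k, g_k, area_k)))
    return total

elements = [
    (1.0, (0.0, 0.0, 1.0), (0.0, 0.0, 2.0), 0.5),  # element facing the gradient
    (1.0, (1.0, 0.0, 0.0), (0.0, 0.0, 2.0), 0.5),  # orthogonal: no contribution
]
print(total_force(elements))  # → (0.0, 0.0, 1.0)
```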
The computation of the torque exerted on the virtual tool by the potential field follows a procedure similar to that for the force. The torque on element k is:

ΔM_k = ∬ r_k(u,v) × a_k(u,v) [n̂_k(u,v) · g_k(u,v)] n̂_k(u,v) du dv

where r_k(u,v) refers to the vector from the point p_k(u,v) to ō_k, the centre of mass of the element k, and the coefficients η_ij = ∬ φ_i(u,v) φ_j(u,v) du dv appear in the discretised form. The total resistant torque is determined as the sum of all the ΔM_k.
When resistance based on the potential field has been determined, other forces or torques, such as the constraint force from "Ghost & Real Probes" are also considered. After the total force F and the total torque M are determined, they are provided to the procedure "Force Maker", which invokes functions from Haptics Library to provide control signals to the haptic device to generate force and torque sensations.
Upon executing such a procedure, collisions or contact between the virtual tool and a volume object can be simulated. Since g will be greater on the boundary of a volume object according to the development of the potential field, greater force feedback will be sensed when approaching a boundary.
Using Audio Effect to Compensate Haptic Sense

The Default Sound Effect Controller is used to simulate the sound when a collision occurs. It is difficult to implement a haptic interface that lets the user sense material properties such as stiffness. Therefore, playing an audio effect can be used to compensate for, or enhance, such a sense. In the Default Sound Effect Controller, the shapes and the material properties of the virtual probe and the virtual object are taken into account during sound synthesis. Moreover, the approach speed of the probe is another important factor that influences the sound of a collision.
The audio response provided by the potential field in the preferred embodiment may be described as follows. The input generated from the potential field for the audio device is a scalar value. Hence it can be well defined in relation to the distance to a target structure. For example, when a device moves away from a target structure, the volume of the audio output is determined to be proportional to the distance pre-calculated in the potential field. In many cases, different sounds may also be deployed to represent the distance. A direct spoken broadcast of the distance can also be employed to improve the awareness of the user. A specified coefficient should be used for the purpose, such as:
Voice = v · d
The coefficient v can be customized depending on the application.
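The relation Voice = v · d might be implemented as sketched below; the clamp to a maximum output level is an added assumption, as are the numeric values.

```python
# Hypothetical sketch: audio output level proportional to the pre-calculated
# distance d to the target structure, with an application-specific
# coefficient v and an assumed clamp at full volume.

def audio_level(distance, v=0.2, max_level=1.0):
    """Voice = v * d, clamped to the device's maximum output level."""
    return min(v * distance, max_level)

print(audio_level(2.0))   # → 0.4
print(audio_level(10.0))  # → 1.0 (clamped)
```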
Constraint Display Technique
In situations involving manipulation of a virtual object, a constraint to the reach-in device should be provided.
Since visual information strongly influences haptic perception, it is preferable for constraint rendering that a specific visual hint for a constraint be provided to accompany the use of a tool. A technique referred to as "Real & Ghost Probes" is employed in the Default Force Feedback Controller to implement such a visual cue.
Two probes are employed: the first (the real probe) is used for tracking, while the other (the ghost probe) is displayed as a visual cue to suggest to the user what to do or where to go. However, the ghost probe is usually invisible; in other words, it is hidden within the real probe unless some application-based condition has failed and the respective constraint is to take effect. The Default Vision Controller supports the Default Force Feedback Controller by performing tasks such as probe rendering, while the Default Force Feedback Controller takes control to determine the behaviour of the ghost probe. The following example details how the Default Force Feedback Controller makes such a determination. When a user traces a certain virtual path with a reach-in device, some module will be able to continually give suggestions beforehand about the position and direction in which the device should move. The Default Force Feedback Controller will then refer to both these motion suggestions and the current spatial information of the device to determine whether the device diverges from the desired path. If it does, the Default Force Feedback Controller will render the ghost probe visible, move it to the suggested position, and keep it there, while resistance is applied to the reach-in device forcing it toward the ghost probe. From the user's perspective, it appears that the ghost probe splits from the real probe. Meanwhile, the user will feel less resistance when moving the real probe toward the ghost probe. When the real probe is moved sufficiently close to the ghost probe, the ghost probe will re-merge with the real probe and disappear.
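One update step of the split-and-merge behaviour described above could be sketched as follows; the Euclidean distance test and the tolerance value are assumptions standing in for the application-based condition.

```python
# Hypothetical sketch of the "Real & Ghost Probes" state update: the ghost
# probe becomes visible at the suggested position when the real probe
# diverges beyond a tolerance, and re-merges when the real probe is close.

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def ghost_probe_state(real_pos, suggested_pos, tolerance=0.5):
    """Return (ghost_visible, ghost_pos) for one controller update."""
    if distance(real_pos, suggested_pos) > tolerance:
        return True, suggested_pos       # split: show the ghost at the target
    return False, real_pos               # merged: ghost hidden in the real probe

print(ghost_probe_state((0.0, 0.0, 0.0), (2.0, 0.0, 0.0)))  # → (True, (2.0, 0.0, 0.0))
print(ghost_probe_state((1.9, 0.0, 0.0), (2.0, 0.0, 0.0)))  # → (False, (1.9, 0.0, 0.0))
```

In a full implementation the same divergence test would also scale the resistance applied to the reach-in device.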
Virtual Tools
If a user handles an interactive device, the feeling is somewhat physical. If the user wears a force feedback glove, such a feeling would come from a simulated haptic rendering. Thus a virtual tool is independent of any physical device. Accordingly, the existence of a virtual tool only depends on whether the user can sense it, and has nothing to do with whether there is an actual device associated with it.
When only the force feedback glove is available, a user can "grasp" a virtual tool with a certain gesture while the respective haptic sense is fed back to the fingers and palm. Moreover, a user should be able to feel what has occurred from his fingers when the "grasped" tool touches or collides with other virtual objects. Since these tools are only in virtual space, they could be designed with any shape, configuration or characteristic.
For example, an ideal tool for exploring cells may have a pen-like shape with a thin, long probe at one end. It is also expected to have some intelligence to help a user find targets of interest, being able to guide the user by means of voice, vision and/or haptic sense, as well as accepting the user's spoken commands.
Naturally, the Target Finder and the respective techniques for three-dimensional snap and "Real & Ghost Probes", as discussed previously, can be integrated to support such a virtual tool. From this perspective, the virtual tool can be regarded as a combination of a group of related functions. The concept therefore provides a means for a user to recombine various features into a few convenient and powerful tools that he requires. Several default virtual tools are provided relating to contour detection, point tagging, shape modification and deformation. However, the system also provides application developers with the ability to define their own tools.
Extractor
In the preferred embodiment, a hierarchical contour model is used to represent an object extracted from volume image data. This model can be further converted into other geometric models such as a surface model or a mesh model. The Extractor includes three different modes of operation, namely Tracking Mode, Picking Key Points Mode and Guided Mode. In the following sections, the architecture and implementation of the Extractor will be discussed. The role of the Extractor in operational procedures is subsequently described.
Virtual Tools
Preferably, the Extractor includes two virtual tools referred to as the Cutting Plane Probe and the 3D-point-picker. In their implementation, several previously described techniques or functions such as 3D snap, Target Finder, "Ghost & Real Probe" and audio effects are all integrated to facilitate the interactive extraction.
The Cutting Plane probe preferably has the shape of a magnifying glass, namely a handle with a loop at one end. As a result, it is a tool with six degrees of freedom. Furthermore, the junction between the handle and the loop is preferably rotatable, enabling the loop to rotate when some virtual obstacle is encountered or when it is attracted or driven by some force in the virtual environment. In the Extractor, the cutting plane technique has been exploited to control such rotation.
Moreover, the shape of the loop of the Cutting Plane probe can be deformed to suit the shape of any closed planar curve. Usually, the loop is a circle with a user definable radius.
The 3D-point-picker includes the Target Finder as a component and thus it is a virtual tool that responds to spoken commands. Furthermore, if some domain knowledge database is available, the picker will have more data from which to help the user. Certainly, it is preferred to integrate the constraint display and other facilities for 3D-point reaching.
Architecture and Implementation
The following paradigm describes the Extractor, where the Editor acts as a co-worker receiving the Extractor's output and providing the Extractor with a respective parametric object model.
Except for the module R/S, which receives and sends messages, there are four modules within the Extractor. The Detector module forms the core and the others are its peripheral parts. Each peripheral part is responsible for handling a respective working mode, while the Detector processes tasks, such as contour detection, submitted from the peripheries.
Detector
The function of the Detector is to extract a contour on a cutting plane that is determined automatically. The cutting plane is determined using a known two-dimensional image processing technique, the details of which are not described in this specification. The input includes a closed curve and a supporting plane. The steps of the algorithm are illustrated below, where the cutting-plane technique mentioned before is used.
(a) Two specific parallel planes (f1 and f2) are determined according to the input, meeting the following requirements. They should transect the desired object. They are to be parallel to the supporting plane and located in each half-space of that plane respectively. The distance between them should be relatively small compared with the enclosing circle of the closed curve.
(b) The volume data is projected or sampled onto f1 and f2 respectively.
(c) In each plane, a 2D contour detection algorithm is applied. Two central points (b1 and b2) are evaluated. A vector is then determined by connecting the two points such that the dot product between the vector and the supporting plane's normal is positive.
(d) A plane, referred to as the cutting-plane, can be determined from the vector and the middle point m between the two central points (b1 and b2). The cutting-plane is completely determined by two conditions: the vector is its normal and the midpoint m lies on it. The method from (a) to (d) for determining the cutting-plane is the so-called cutting-plane technique.
(e) The contour on the cutting-plane is extracted and corresponding parameters are yielded.
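The geometric core of steps (c) and (d) can be sketched as follows. This is an illustrative reconstruction, not the patented implementation; the 2D contour detection on f1 and f2 is abstracted away and only its two resulting central points, b1 and b2, are taken as input:

```python
import numpy as np

def cutting_plane(b1, b2, support_normal):
    """Steps (c)-(d): derive the cutting plane from the central points b1 and
    b2 of the contours detected on the two parallel planes f1 and f2."""
    b1, b2 = np.asarray(b1, float), np.asarray(b2, float)
    v = b2 - b1
    # Orient v so its dot product with the supporting plane's normal is positive.
    if np.dot(v, support_normal) < 0:
        v = -v
    m = (b1 + b2) / 2.0              # midpoint between the two central points
    n = v / np.linalg.norm(v)        # the cutting plane's normal
    return m, n                      # plane = {x : dot(n, x - m) == 0}
```

The returned pair (m, n) fully determines the plane, matching the two conditions stated in step (d).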
In the Detector module, the potential field model plays an important role as it acts to exert an attractive force on the deformable model which is placed on the cutting-plane to extract the desired contour.
Initially, a deformable model, which is geometrically a closed curve and includes various mechanical properties such as elasticity and rigidity, is arranged to enclose the object on the cross section. The forces calculated from the potential field modelling will attract the model toward the boundary of the vascular object. Since the model is deformable, both the shape and the location of the model will be influenced. When the force of attraction and the internal forces resulting from the properties of elasticity and rigidity of the model are in equilibrium, the model will not be deformed any further, and at that stage it should correspond to the contour of the cross section of the vascular object.
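The equilibrium process can be illustrated with a minimal deformable-contour (snake) iteration. This is a hypothetical sketch, not the patent's implementation: the potential-field attraction is supplied as a callable `ext_force`, and `alpha` and `beta` stand in for the elasticity and rigidity properties:

```python
import numpy as np

def evolve_snake(pts, ext_force, alpha=0.1, beta=0.01, step=0.1, iters=200):
    """Evolve a closed contour toward equilibrium between internal forces
    (elasticity alpha, rigidity beta) and an external attraction field.
    pts: (N, 2) array of contour points; ext_force(p) -> (N, 2) forces."""
    p = np.asarray(pts, float)
    for _ in range(iters):
        # Second difference along the closed curve: elasticity (stretching).
        d2 = np.roll(p, -1, 0) - 2 * p + np.roll(p, 1, 0)
        # Fourth difference: rigidity (bending).
        d4 = (np.roll(p, -2, 0) - 4 * np.roll(p, -1, 0) + 6 * p
              - 4 * np.roll(p, 1, 0) + np.roll(p, 2, 0))
        p = p + step * (alpha * d2 - beta * d4 + ext_force(p))
    return p
```

With an attraction field pulling toward the unit circle, a contour started at radius 2 settles near the radius-1 "boundary", slightly inside it where attraction balances the contracting elastic force.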
Tracker
The Tracker tracks the user-controlled reach-in device and is responsible for reconstructing a sweeping volume to capture the user-desired object in the volume data.
The Tracker records the spatial information of the CP-probe while incrementally constructing the sweeping volume, comprising a series of closed curves, and checks the validity of the volume against incorrect user input. Once it detects invalid input, it informs the Cutting Plane probe, which lets the Default Force Feedback Controller start the corresponding constraint display using "Ghost & Real Probe".
As for the constraint, the Tracker is only required to determine the next position to which the probe should move, and feed back the expected position and trend to the Cutting Plane probe. It is preferred to use a method referred to as "2-steps-forecast" to evaluate this expectation. In the first step, the Tracker exploits a curve to locally fit the trail of the moving loop's centre around its latest position so as to estimate the position and trend. In the second step, the estimated information is passed to the Detector with other required parameters, where a more accurate evaluation is effected and the result, a position to which the loop's centre should move, is fed back to the Tracker. On the other side, the Tracker passes those closed curves to the Detector continuously while accepting and rendering the extracted contours.
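The first step of the "2-steps-forecast" can be sketched as a local fit to the recent trail of the loop's centre. The choice of fitting curve is an assumption (the patent does not specify one); a quadratic over the last few samples is used here:

```python
import numpy as np

def forecast_next(centres, k=4):
    """Step 1 of the '2-steps-forecast': fit a local quadratic to the last
    k loop-centre positions and extrapolate one step ahead.  Returns the
    estimated position and unit trend vector to pass to the Detector."""
    c = np.asarray(centres, float)[-k:]
    t = np.arange(len(c))
    # Fit each coordinate separately as a quadratic in the sample index.
    coeff = [np.polyfit(t, c[:, d], 2) for d in range(c.shape[1])]
    t_next = len(c)                  # one step beyond the last sample
    pos = np.array([np.polyval(cf, t_next) for cf in coeff])
    vel = np.array([np.polyval(np.polyder(cf), t_next) for cf in coeff])
    return pos, vel / np.linalg.norm(vel)
```

The Detector would then refine this (position, trend) estimate in the second step.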
Explorer
The Explorer reconstructs a surface or a curve passing through several points given by a user while invoking the Detector to extract contours near the surface or the curve. The surface is expected to be an approximate representation of the user-desired object's surface and the curve an approximate skeleton line of the object. In the following sections, a description of the Explorer working with a skeleton line is provided.
The Explorer plays the role of a pathfinder after accepting a list of points, denoted as key-point-list (KPL). It will then attempt to explore the path the user desires. Namely, the Explorer must interpolate several points or a curve between every two adjacent KP's. A relatively simple method for this is linear or polynomial interpolation.
Figure 14a shows how the Explorer works. The elements of the KPL are denoted q0, q1, q2 ... qn. At the beginning, the Explorer gets the first key point q0 from the KPL and lets p0 = q0, where pi (i = 0 ... m) stands for the interpolated points between q0 and q1. Tentatively, it has already acquired a radius R0 and a vector V0. The radius defines the maximum length of one step the Explorer takes; the vector defines the direction along which the Explorer will step. The Explorer then tries to predict the next position p1', which lies almost along V0 at a distance from p0 of less than R0. Moreover, the tangent V1' of the desired path at position p1' is evaluated together with R1'. The predicted triple (p1', V1', R1') from the Explorer is then passed to the Detector. The Detector yields a triple (p1, V1, R1), expected to be a precise version of the input, where p1 is on the skeleton line and close to p1', V1 is the tangent vector of the skeleton line at p1, and R1 is the radius of the minimum enclosing circle of the extracted contour around p1. The new triple is then returned to the Explorer. At that time, the Explorer treats p1 as the current position, and (p1, V1, R1) is ready for the next step. In the same way, it advances and outputs a list of triples (pi, Vi, Ri) (i = 0 ... m) representing the skeleton line between q0 and q1 until reaching, or coming very close to, q1. Thereafter, each path between qi and qi+1 (i = 0 ... n-1) can also be explored in the same way.
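The Explorer's advance between two key points can be sketched as follows. The Detector is abstracted as a callback `detect` that refines a predicted triple into a precise one; this interface is a hypothetical stand-in, since the patent describes the data flow rather than a code API:

```python
import numpy as np

def explore_segment(q0, q1, R0, detect, u=0.5, max_steps=100):
    """Explorer advance between key points q0 and q1.  `detect(p, V, R)`
    stands in for the Detector: it refines a predicted triple into a
    precise triple (p, V, R) on the skeleton line."""
    q0, q1 = np.asarray(q0, float), np.asarray(q1, float)
    V0 = (q1 - q0) / np.linalg.norm(q1 - q0)     # initial direction
    p, V, R = detect(q0, V0, R0)                 # refine the initial triple
    triples = [(p, V, R)]
    for _ in range(max_steps):
        if np.linalg.norm(q1 - p) <= u * R:      # reached (or nearly) q1
            break
        p_pred = p + u * R * V                   # prediction, alternative (a)
        p, V, R = detect(p_pred, V, R)           # Detector refinement
        triples.append((p, V, R))
    return triples
```

Stepping in increments bounded by u times the local radius keeps each prediction inside the region the last extracted contour describes.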
The method for the initial triple (p0, V0, R0) is given below. Since we compute the precise triple, a predicted triple for q0 must be calculated first. This predicted triple is then refined in the Detector, and the result is what we desire. The simple algorithm is shown below.
• Let p0' = q0 and V0' = (q1 - q0)/||q1 - q0||. • Use a region-growing algorithm to estimate the radius R0' of the enclosing circle of the contour around q0.
• Call the Detector with (p0', V0', R0') as parameters. As usual, the Detector will output a triple (p0, V0, R0).
In Figure 14b, b1, b2 and m are the auxiliary positions for the Detector's refinement. The reader can refer to the section about the Detector.
There are two alternatives for predicting the next position. Here we assume that pi is available on the path between q0 and q1, and pi+1' is the desired position.
(a) Let pi+1' = pi + u·Ri·Vi when ||q1 - pi|| > u·Ri; if ||q1 - pi|| ≤ u·Ri, let pi+1' = q1, where u is a constant between 0 and 1 (shown in Fig. 3.6).
(b) Let pi+1' = pi + u·Ri·(Vi + v) when ||q1 - pi|| > u·Ri; if ||q1 - pi|| ≤ u·Ri, let pi+1' = q1, where u is a constant between 0 and 1 and v is a vector perpendicular to Vi with length less than u·Ri. Since v is an uncertain vector, pi+1' is correspondingly uncertain. Instead of predicting one determinate position, the Explorer therefore generates a set of possible positions. In practice, the Explorer first chooses a v at random and validates it by checking that no surface crosses the path between pi+1' and pi. If the position is valid, the Explorer performs no further computation; otherwise it chooses another v at random. A further validation is that pi+1' must satisfy ||pi+1' - q1|| < ε when ||pi+1' - p0|| ≥ ||q1 - p0||, where ε is a relatively small constant.
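Alternative (b) can be sketched as below. The random perpendicular vector v is drawn by projecting a Gaussian sample onto the plane normal to Vi; this construction is one plausible choice among many, since the patent only requires v perpendicular to Vi with length less than u·Ri:

```python
import numpy as np

def predict_with_jitter(p_i, V_i, R_i, u=0.5, rng=None):
    """Alternative (b): perturb the step with a random vector v
    perpendicular to the unit tangent V_i, with |v| < u*R_i."""
    rng = rng if rng is not None else np.random.default_rng()
    p_i, V_i = np.asarray(p_i, float), np.asarray(V_i, float)
    w = rng.standard_normal(3)
    v = w - np.dot(w, V_i) * V_i                 # remove component along V_i
    v *= (u * R_i * rng.uniform()) / np.linalg.norm(v)   # scale so |v| < u*R_i
    return p_i + u * R_i * (V_i + v)
```

Calling this repeatedly yields the set of candidate positions from which the Explorer picks a valid one.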
Discreter
The Discreter plays the role of an advanced converter from a parametric object model to a discrete object model such as the contour model. It is a bridge between the Extractor and the Editor, as observed in Figure 15. It comprises a model loader and a model converter internally. There are two ways to pass a parametric object to it. The first is by way of the Editor and the second is by importing existing models directly from an external file. The parametric object models are regarded as shape hints in the Discreter and can be exploited to aid in the extraction. If such a guide is available, it will assist the exploration of the desired object in the volume data, where the awkward problem of branch selection would be avoided.
Editor
The Extractor and Editor construct a loop for the incremental extraction. There are also several loops in the Editor to provide various ways to modify the extracted result or shape hint. Thus, the Editor can work firstly to feed the Extractor a shape hint to guide its extraction and secondly as a post-processor to refine or trim objects determined by the Extractor.
The Editor consists of three parts, namely, a Modeler, a Selector and an Adjuster.
The Modeler constructs a parametric model, based on a spline technique, from a discrete model consisting of a series of contours. There are two alternatives, spline interpolation and spline fitting, to accomplish the task. The Modeler works automatically and no interaction is required.
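The Modeler's two alternatives map naturally onto SciPy's parametric spline routines; this is a sketch assuming `scipy` is available, not the patent's own implementation. Passing `s=0` to `splprep` forces interpolation through every contour point, while a positive smoothing factor gives a fitting spline:

```python
import numpy as np
from scipy.interpolate import splprep, splev

def contour_to_spline(points, fit=False, smooth=0.01):
    """Modeler sketch: build a parametric B-spline from one discrete
    contour, given as an (N, d) array of points (d = 2 or 3).
    fit=False -> spline interpolation (passes through every point);
    fit=True  -> spline fitting (smoothing approximation)."""
    pts = np.asarray(points, float)
    coords = [pts[:, d] for d in range(pts.shape[1])]
    tck, u = splprep(coords, s=(smooth if fit else 0.0))
    return tck, u        # evaluate anywhere in [0, 1] with splev(t, tck)
```

The returned `tck` is the continuous parametric model; `splev` evaluates it at any parameter value, which is what the later editing stages need.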
The Selector is responsible for the selection of several key points, either automatically or interactively. The key points are those points scattered on important positions in the parametric model. In interactive mode, the user is also expected to pick points that are around some lost detail in the parametric model, as this greatly assists the refinement procedure.
The Adjuster manages the user's interaction and provides the user with the 3D-point-picker to control those selected key points. The picker can be used to access and control any key point with less effort, as described in the section "Guided Mode". Thus, the user can alter the shape under a certain constraint by adjusting the key points.
Example Operational Procedure
In this section, the use of an Incremental Extraction and Edit System is described with reference to the extraction and editing of a three-dimensional object from volume data.
A three-dimensional volume is loaded and displayed alongside audio and haptic controls. The first step involves a preliminary extraction of a three-dimensional object from the volume data. The user then refines the extracted model with tools provided in the Editor module. The user can output or save the model when he or she is satisfied with the results. There are three modes in the extraction method. We will discuss how the Extractor and Editor work toward their goal of aiding the user to extract and edit objects in volume data.
Extractor
The Extractor provides a user with three interaction modes, namely Tracking Mode, Picking Key Points Mode and Guided Mode. Under each mode, the user is expected to give the Extractor a shape hint, in a certain form, of his desired object by use of one or several corresponding virtual tools. Thus, the emphasis is on two questions: what form should be given and how to give that form.
Tracking Mode
This mode is intended to provide a user with the virtual tool CP-probe and expects the user to catch a desired object in volume data. The entire motion will be tracked by the Extractor which will then be exploited as a clue to benefit the extraction.
When the Extractor is working in Tracking Mode, the user can move the CP-probe freely before he selects a target object. At the same time, he can reshape the loop. Then, he is expected to drive the probe to sweep a volume in which the target is almost fully contained. Meanwhile, a series of contours is displayed approximating the surface of the desired object that is nearest to the sweeping volume. Therefore, if there is any cavity in the object, it cannot be extracted unless the user sweeps another volume that encloses it sufficiently closely. During the sweeping operation, a constraint will aid the user to perform the sweeping more accurately, so as to ensure the volume almost encloses the "target" as perceived by the Tracker.
Picking Key Points Mode
In this mode, the user is expected to handle the 3D-point-picker and pick some key-points (KP) scattered on some shape hint of the desired object, such as skeleton or surface or even a volume.
If the KP's spread on a surface, the Explorer will reconstruct a polygonal surface from those KP's, and then slice it up with a stack of parallel planes. The intersection, as piecewise lines closed or unclosed on each section, will be transferred to the Detector. The Detector will attempt to extract those contours, each of which is the nearest to a certain piecewise line respectively.
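Slicing the reconstructed polygonal surface with a stack of parallel planes reduces, per plane, to a triangle-plane intersection. The following is an illustrative geometry routine for horizontal planes z = const, not the patent's implementation; degenerate cases (a vertex exactly on the plane) are ignored in this sketch:

```python
import numpy as np

def slice_triangles(tris, z):
    """Intersect a triangulated surface with the plane z = const, returning
    the line segments of the cross-section (the 'piecewise lines' that would
    be handed to the Detector).  tris: (M, 3, 3) array of triangles."""
    segments = []
    for tri in np.asarray(tris, float):
        pts = []
        for i in range(3):
            a, b = tri[i], tri[(i + 1) % 3]
            da, db = a[2] - z, b[2] - z
            if da * db < 0:                  # edge crosses the plane
                t = da / (da - db)           # linear interpolation parameter
                pts.append(a + t * (b - a))
        if len(pts) == 2:
            segments.append((pts[0], pts[1]))
    return segments
```

Chaining the segments of each section into closed or open polylines then yields the piecewise lines described above.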
If the KP's follow a skeleton line (central line), the Explorer will attempt to trace the line. When dealing with a vessel-like object, picking points along the central line is the most convenient approach. For an object with no cavity, its skeleton line is determinable and unique. To extract such objects with given KP's along their skeletons, the Explorer can do more for a user since these objects are simpler in topology.
After a user has finished picking points, the Explorer can operate automatically. Nevertheless, the degree of automation of the Explorer depends upon the quality of the KP's. In other words, if the specified KP's are unable to define a shape hint well, the Explorer pauses and prompts the user for further instruction. The instructions to a user may include questions on what to do and how to do it, or to suggest that the user input some parts of the shape hint again. In this mode, the user need only pick some points on a surface or along a skeleton. Thus, the 3D-point-picker plays a very important role here since it can directly influence the efficiency and effectiveness of such an operation. The techniques for reaching a position are well established such as calibrating a filter to reduce spatial data noise, 3D-snap and intelligent Target Finder. Two examples for point approaching have already been described in the section entitled "Reach A Position".
Guided Mode
In this mode, a user inputs a curve or surface as a shape hint directly instead of a number of key points. Based on the b-spline modelling technique, it is relatively simple for a user to model a shape hint such as a skeleton or surface. More conveniently, a user can import some existing shape hint directly. These shape hints are referred to as guides.
At this stage, the Extractor will perform registration between a guide and the user-desired object. There could be different registration methods for different guides. When a curve is considered as a skeleton-guide, the registration method involves adjusting it to be contained in the object and being able to represent the skeleton well. When a surface is regarded as a surface-guide, the registration deforms it according to the shape of the desired object while a metric evaluates the difference between them. When an image-based model from a certain atlas is input as a guide, a seed-point-based registration will be executed. After registration, the Discreter module will sample that curve or surface, then provide the required parameters to the Detector similar to the Explorer. The difference is that the Discreter will encounter less difficulty under the guide when it tries to find the next position where the contour should be extracted. This is because there is no possibility for the Discreter to lose its way during such an exploration.
In modelling and registration, the 3D-point-picker can aid a user to perform various tasks. By utilising the constraint display technique and voice commands, the picker is expected to be a convenient tool for modelling such guides. For example, a user may intend to design a curve to represent the skeleton of an object. In this instance, the user is expected to input several control points to model a curve. The user can give voice commands to control the curve shape, such as altering spline bases or geometric continuity and adjusting or jiggling control points. During the process of adjustment, the user will perceive haptic sensation. These haptic sensations are the result of the synthetic effect of multiple forces, such as the elasticity of the curve and artificial repulsion based on potential field theory. The force originating from the constraint that the curve should be a skeleton line also adds to the haptic sensation. These sensations help the user work with three-dimensional modelling. It should be noted that the tool could also be exploited in any situation where editing is required.
In Guided Mode, the result may be more accurate and obtained more efficiently than in Picking Key Points Mode; however, a guide must be modelled or available beforehand. Another way of providing guides while avoiding or lessening the burden of modelling is to use either Tracking Mode or Picking Key Points Mode to generate a contour model. As an initial model, it is submitted to the Editor, participating in some editing or refining loop. A parametric model of the same object will be generated by the Editor, and this is then regarded as a guide for the Extractor, where the extraction in Guided Mode will be executed again. The Guided Mode thus enables users to perform a progressive extraction.
Editor
After the extraction phase, the user has the option to start the Editor inside the Extractor or to start it outside the Extractor, in which case the user needs to acquire the contour model manually. A parametric model is constructed and rendered. The user then chooses the mode for selecting key points. If the automatic mode is chosen, several key points, which preserve the shape of the model, are selected and highlighted. They are then ready for the user's adjustment. If the user selects the interactive mode, the user is required to tag some points as key points on the parametric model.
Once a key point has been selected, the user can then move it immediately. It is suggested that he should add some key points near some details that haven't been extracted or have been extracted incorrectly. Such types of key points are known as detail key points. After adjustment, a new parametric model that has interpolated the key points is reconstructed. If the user requires a refinement, he should initiate the Extractor with the new parametric model.
In the Guided Line Mode, the Extractor will establish a new contour model corresponding to the parametric model. Therefore, the lost or imperfect detail would be extracted if respective detail key points were given during the key point selection. As previously described, the process is a progressive refinement loop for extraction and edit. During the process, the user may obtain the result at any time.
Workflow
The basic workflow is outlined in Figure 16. In this Figure, the Interaction Manager is omitted for convenience. The discrete model (contour model) is first passed to the Editor as input. This model then flows to the Modeler (B). Afterwards, a parametric model, which is the result of either an interpolation or a fitting method, is passed to the Selector (C). The Selector includes three modes, namely automatic mode, interactive mode and simple mode, which can be chosen by the user. The selected key points are conveyed to the Adjuster (D). After adjustment, those key points with new positions are returned to the Modeler. Then, processes B, C and D are executed again. Thus, the processes B, C and D form a loop, called the basic editing-loop.
Except for the first basic editing-loop, the user may choose the simple mode to re-use KP's selected in the previous editing loop and so avoid reselecting them. In the simple mode, the Selector simply conveys the parametric model and those KP's selected in the previous editing loop to the Adjuster.
Besides the basic editing-loop, there is another loop consisting of the processes B, E, A and B. In this loop, the module E is the Extractor working in the Guided Mode. Since the Extractor performs better in Guided Mode, the loop may be considered a progressive refinement. This loop is referred to as the refining-loop, and any combination of the refining-loop and the basic editing-loop may be invoked, such as the sequential process steps BCDBEAB, BEABCDB, BEABCDBCDB and so on, thus forming a complex editing-loop.
Finally, it will be appreciated that there may be other modifications and alterations made to the configurations described herein that are also within the scope of the present invention.

Claims
1. A method of interacting with simulated three-dimensional objects, the method including the steps of the identification of three-dimensional objects from volume images; the association of physical properties with identified objects, said properties including at least visual and haptic properties of the identified objects; and incorporating said identified objects and associated physical properties into a system including at least one visual interface device and at least one haptic interface device, thus enabling the system to simulate visual and haptic properties of the identified objects, or any part thereof, to a human user interacting with the simulated three-dimensional objects, or any part thereof, said interaction including the generation of signals by the system for transmission to the at least one visual interface device in accordance with the physical properties associated with the objects and the reception of signals from the at least one haptic interface device in accordance with user requests.
2. A method according to claim 1 wherein during interaction between the human user and the simulated objects ancillary visual information is presented to the user.
3. A method according to claim 1 wherein the method includes the association of audible properties with identified objects, or parts thereof, and the system includes at least one audio interface device.
4. A method according to claim 3 wherein during interaction between the human user and the simulated objects ancillary audio information is presented to the user.
5. A method according to claim 1 wherein the at least one haptic interface device includes the facility to receive signals from the system said signals being in accordance with the associated haptic properties of the simulated objects, or any part thereof, that the user interacts with.
6. A method according to claim 1 wherein the interaction between the various interface devices of the system is co-ordinated such that the correct registration of physical properties with the identified objects is maintained for the various interfaces during interaction between the simulated objects and the human user.
7. A method according to claim 1 wherein the method includes the step of editing the representation of a simulated object in order to refine the representation of the simulated object derived from volume images.
8. A method according to claim 7 wherein the step of editing the representation of a simulated object includes the use of the at least one haptic and at least one visual interface device by the user.
9. A method according to claim 8 wherein the step of editing the representation of a simulated object includes the use of at least one audio interface device by the user.
10. A method according to claim 1 wherein a discrete model of identified objects is derived from volume images.
11. A method according to claim 10 wherein the method includes the step of editing the representation of a simulated object in order to refine the representation of the object derived from volume images, the method also including the application of an interpolative scheme to produce a continuous model from the discrete model derived from volume images.
12. A method according to claim 11 wherein the step of editing the representation of a simulated object includes the use of the at least one haptic and at least one visual interface device by the user.
13. A method according to claim 12 wherein the step of editing the representation of a simulated object includes the use of at least one audio interface device by the user.
14. A method according to claim 11 including an iterative process whereby an edited version of an object is compared with a previous discrete model of the object to determine the difference between the edited version and the previous discrete model of the object and whether the difference is within an acceptable limit, said edited version being converted into a discrete model for the purpose of determining the difference with the previous discrete model.
15. A system for interacting with simulated three-dimensional objects, the system including representations of three-dimensional objects identified from volume images, said representations including physical properties associated with each of the objects relating to at least visual and haptic properties thereof, and at least one visual interface device and at least one haptic interface device enabling a user to interact with a simulated three-dimensional object said interaction including the generation of signals by the system for transmission to the at least one visual interface device in accordance with the visual properties associated with the objects and the reception of signals from the at least one haptic interface device in accordance with user requests.
16. A system according to claim 15 including the provision of ancillary visual information to a user.
17. A system according to claim 16 wherein the physical properties associated with each of the objects, or any part thereof, includes audible properties and the system also includes at least one audio interface device and the generation of signals by the system for transmission to the at least one audio interface device in accordance with the audio properties associated with the objects, or any part thereof.
18. A system according to claim 17 including the provision of ancillary audio information to a user.
19. A system according to claim 15 wherein the haptic interface device is capable of receiving signals from the system corresponding to the associated haptic properties for any simulated object, or part thereof, said signals conveying a haptic sensation to the user as a result of interacting with the object.
20. A system according to claim 15 wherein the system includes a voice recognition facility capable of receiving and interpreting the spoken requests of a user, thus enabling the user to issue commands to the system without necessitating the use of one or both of the user's hands.
21. A system according to claim 15 including the capability to co-ordinate the interaction between the various interface devices such that the correct registration of physical properties with the identified objects is maintained for the various system interfaces during interaction between the simulated objects and the human user.
22. A method of interacting with simulated three-dimensional objects, the method including the steps of the identification of three-dimensional objects from volume images; the development of a model to represent those objects; the association of physical properties with identified objects in the developed model said properties including at least visual and haptic properties of the identified objects; and incorporating said model and associated physical properties into a system including at least one visual interface device and at least one haptic interface device thus enabling the system to simulate visual and haptic properties of the identified objects, or any part thereof, to a human user interacting with the simulated three-dimensional objects, or any part thereof, said interaction including the generation of signals by the system for transmission to the at least one visual interface device in accordance with the physical properties associated with the objects and the reception of signals from the at least one haptic interface device in accordance with user requests.
23. A method according to claim 22 wherein during interaction between the human user and the simulated objects, or any part thereof, ancillary visual information is presented to the user.
24. A method according to claim 22 wherein the method includes the association of audible properties with identified objects, or parts thereof, in the model, and the system includes at least one audio interface device.
25. A method according to claim 24 wherein during interaction between the human user and simulated objects, or parts thereof, ancillary audio information is presented to the user.
26. A method according to claim 22 wherein the at least one haptic interface device includes the facility to receive signals from the system said signals being in accordance with the associated haptic properties of the simulated objects, or any part thereof, that the user interacts with.
27. A method according to claim 22 wherein the interaction between the various interface devices of the system is co-ordinated such that the correct registration of physical properties with the developed model is maintained for the various interfaces during interaction between the simulated objects and the human user.
28. A method according to claim 22 wherein the method includes the step of editing the model of a simulated object in order to refine the model of the simulated object derived from volume images.
29. A method according to claim 28 wherein the step of editing the model of a simulated object includes the use of the at least one haptic and at least one visual interface device by the user.
30. A method according to claim 29 wherein the step of editing the model of a simulated object includes the use of the at least one audio interface device by the user.
31. A method according to claim 22 wherein the model of identified objects derived from volume images is a discrete model.
32. A method according to claim 31 wherein the method includes the step of editing the model of a simulated object in order to refine the model of the simulated object derived from the volume images, the method also including the application of an interpolative scheme to produce a continuous model from the discrete model derived from volume images.
33. A method according to claim 32 wherein the step of editing the representation of a simulated object includes the use of the at least one haptic and at least one visual interface device by the user.
34. A method according to claim 33 wherein the step of editing the representation of a simulated object includes the use of at least one audio interface device by the user.
35. A method according to claim 32 including an iterative process whereby an edited version of a model is compared with a previous discrete model of the object to determine the difference between the edited version and the previous discrete model of the object and whether the difference is within an acceptable limit, said edited version being converted into a discrete model for the purpose of determining the difference from the previous discrete model.
36. A method according to claim 22 wherein the model is a potential field model representation of identified objects.
37. A method according to claim 36 wherein the potential field model includes a physically oriented data structure for the representation of objects derived from volume images and the association of physical properties therewith.
38. A system for interacting with simulated three-dimensional objects, the system including a model of three-dimensional objects identified from volume images, said model including physical properties associated with each of the objects relating to at least visual and haptic properties thereof, and at least one visual and at least one haptic interface device enabling a user to interact with a simulated three-dimensional object, said interaction including the generation of signals by the system for transmission to the at least one visual interface device in accordance with the visual properties associated with the objects and the reception of signals from the at least one haptic interface device in accordance with user requests.
39. A system according to claim 38 including the provision of ancillary visual information to a user.
40. A system according to claim 39 wherein the physical properties associated with each of the objects, or any part thereof, include audible properties, the system also including at least one audio interface device and generating signals for transmission to the at least one audio interface device in accordance with the audible properties associated with the objects, or any part thereof.
41. A system according to claim 40 including the provision of ancillary audio information to a user.
42. A system according to claim 38 wherein the haptic interface device is capable of receiving signals from the system corresponding to the associated haptic properties for any simulated object, or part thereof, said signals conveying a haptic sensation to the user as a result of interacting with the object.
43. A system according to claim 38 wherein the system includes a voice recognition facility capable of receiving and interpreting the spoken requests of a user, thus enabling the user to issue commands to the system without necessitating the use of one or both of the user's hands.
44. A system according to claim 38 including the capability to co-ordinate the interaction between the various interface devices such that the correct registration of physical properties with the model is maintained for the various interfaces during interaction between the simulated objects and the human user.
45. A system according to claim 38 wherein the model is a potential field model representation of identified objects.
46. A system according to claim 45 wherein the potential field model includes a physically oriented data structure for the representation of objects derived from volume images and the association of physical properties therewith.
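Claim 35's refinement loop — edit the model, convert the edited version back into a discrete model, and check its difference from the previous discrete model against an acceptable limit — can be illustrated as follows. This is a minimal sketch, not the patented implementation: the representation of the continuous model as a callable, the function names, and the mean-absolute-error metric are all illustrative assumptions.

```python
import numpy as np

def discretize(continuous_model, shape):
    """Sample a continuous model (here a hypothetical callable phi(x, y, z))
    back onto a voxel grid so it can be compared with the previous
    discrete model. The patent does not fix a particular representation."""
    grid = np.zeros(shape)
    for x in range(shape[0]):
        for y in range(shape[1]):
            for z in range(shape[2]):
                grid[x, y, z] = continuous_model(x, y, z)
    return grid

def within_limit(edited_model, previous_discrete, shape, limit):
    """Claim-35-style check: discretize the edited version, measure its
    difference from the previous discrete model, and report whether that
    difference is within the acceptable limit. Mean absolute voxel error
    is used purely for illustration."""
    discrete = discretize(edited_model, shape)
    diff = np.abs(discrete - previous_discrete).mean()
    return diff <= limit, diff
```

In the iterative process the user would keep editing until `within_limit` reports success, at which point the refined discrete model replaces the previous one.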
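Claims 36–37 and 45–46 describe a potential field model of the identified objects, and claims 26 and 42 describe haptic signals derived from the objects' properties. One common way such a pairing works — sketched below as an assumption, since the patent does not prescribe a specific field or force law — is to let the potential rise with penetration depth into an object and to feed the haptic device a force proportional to the negative gradient of the field, pushing the probe back toward free space.

```python
import numpy as np

def sample_potential(center, radius, shape):
    """Sample an illustrative penetration-depth potential on a voxel grid:
    phi = max(0, radius - distance_to_center), i.e. zero in free space and
    increasing toward the interior of a spherical object. A real system
    would derive the field from objects segmented out of volume images."""
    idx = np.indices(shape).astype(float)  # shape (3, X, Y, Z)
    d = np.sqrt(((idx - np.array(center, float).reshape(3, 1, 1, 1)) ** 2).sum(axis=0))
    return np.maximum(0.0, radius - d)

def feedback_force(phi, position, stiffness=1.0, h=1.0):
    """Reaction force at an interior voxel position: F = -k * grad(phi),
    with the gradient approximated by central differences. The force
    points from the object interior toward free space."""
    x, y, z = position
    grad = np.array([
        (phi[x + 1, y, z] - phi[x - 1, y, z]) / (2 * h),
        (phi[x, y + 1, z] - phi[x, y - 1, z]) / (2 * h),
        (phi[x, y, z + 1] - phi[x, y, z - 1]) / (2 * h),
    ])
    return -stiffness * grad
```

With this construction the force is zero wherever the probe is outside the object and grows with penetration depth, which is the qualitative behaviour a force-feedback surgical simulator needs at a tissue boundary.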
PCT/SG2000/000101 2000-07-07 2000-07-07 A virtual surgery system with force feedback WO2002005217A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/SG2000/000101 WO2002005217A1 (en) 2000-07-07 2000-07-07 A virtual surgery system with force feedback
US10/332,429 US7236618B1 (en) 2000-07-07 2000-07-07 Virtual surgery system with force feedback

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/SG2000/000101 WO2002005217A1 (en) 2000-07-07 2000-07-07 A virtual surgery system with force feedback

Publications (1)

Publication Number Publication Date
WO2002005217A1 true WO2002005217A1 (en) 2002-01-17

Family

ID=20428838

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/SG2000/000101 WO2002005217A1 (en) 2000-07-07 2000-07-07 A virtual surgery system with force feedback

Country Status (2)

Country Link
US (1) US7236618B1 (en)
WO (1) WO2002005217A1 (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2894684A1 (en) * 2005-12-14 2007-06-15 Kolpi Sarl VISUALIZATION SYSTEM FOR HANDLING AN OBJECT
US7925068B2 (en) * 2007-02-01 2011-04-12 General Electric Company Method and apparatus for forming a guide image for an ultrasound image scanner
CN102646350A (en) * 2011-02-22 2012-08-22 上海理工大学 Centrum location device for virtual surgery force sense information acquisition
EP2365421A3 (en) * 2010-03-12 2015-04-15 Broadcom Corporation Tactile communication system
EP3092969A3 (en) * 2009-11-13 2017-03-01 Intuitive Surgical Operations, Inc. A master finger tracking device and method of use in a minimally invasive surgical system
US9901402B2 (en) 2010-09-21 2018-02-27 Intuitive Surgical Operations, Inc. Method and apparatus for hand gesture control in a minimally invasive surgical system
US10543050B2 (en) 2010-09-21 2020-01-28 Intuitive Surgical Operations, Inc. Method and system for hand presence detection in a minimally invasive surgical system
CN111202568A (en) * 2020-03-18 2020-05-29 杨红伟 Magnetic force feedback operation instrument of gynecological and obstetrical surgical robot

Families Citing this family (55)

Publication number Priority date Publication date Assignee Title
US20050018885A1 (en) * 2001-05-31 2005-01-27 Xuesong Chen System and method of anatomical modeling
DE10293993B4 (en) * 2001-09-03 2013-02-07 Xitact S.A. Device for simulating a rod-shaped surgical instrument for generating a feedback signal
US6985145B2 (en) * 2001-11-09 2006-01-10 Nextengine, Inc. Graphical interface for manipulation of 3D models
US7259906B1 (en) 2002-09-03 2007-08-21 Cheetah Omni, Llc System and method for voice control of medical devices
SE0202864D0 (en) * 2002-09-30 2002-09-30 Goeteborgs University Surgical Device and method for generating a virtual anatomic environment
WO2006057304A1 (en) * 2004-11-26 2006-06-01 Kabushiki Kaisha Toshiba X-ray ct apparatus, and image processing device
WO2006088429A1 (en) * 2005-02-17 2006-08-24 Agency For Science, Technology And Research Method and apparatus for editing three-dimensional images
WO2006121957A2 (en) * 2005-05-09 2006-11-16 Michael Vesely Three dimensional horizontal perspective workstation
US8717423B2 (en) 2005-05-09 2014-05-06 Zspace, Inc. Modifying perspective of stereoscopic images based on changes in user viewpoint
US7565000B2 (en) * 2005-11-23 2009-07-21 General Electric Company Method and apparatus for semi-automatic segmentation technique for low-contrast tubular shaped objects
US8379957B2 (en) * 2006-01-12 2013-02-19 Siemens Corporation System and method for segmentation of anatomical structures in MRI volumes using graph cuts
US9224303B2 (en) * 2006-01-13 2015-12-29 Silvertree Media, Llc Computer based system for training workers
EP1905377B1 (en) * 2006-09-28 2013-05-29 BrainLAB AG Preoperative planing of the position of surgical instruments
WO2008058039A1 (en) * 2006-11-06 2008-05-15 University Of Florida Research Foundation, Inc. Devices and methods for utilizing mechanical surgical devices in a virtual environment
US20090060372A1 (en) * 2007-08-27 2009-03-05 Riverain Medical Group, Llc Object removal from images
US11264139B2 (en) * 2007-11-21 2022-03-01 Edda Technology, Inc. Method and system for adjusting interactive 3D treatment zone for percutaneous treatment
WO2009067654A1 (en) * 2007-11-21 2009-05-28 Edda Technology, Inc. Method and system for interactive percutaneous pre-operation surgical planning
US8956165B2 (en) 2008-01-25 2015-02-17 University Of Florida Research Foundation, Inc. Devices and methods for implementing endoscopic surgical procedures and instruments within a virtual environment
CN101404039B (en) * 2008-03-28 2010-06-16 华南师范大学 Virtual operation method and its apparatus
ES2346025B2 (en) * 2008-06-02 2011-11-16 Universidad Rey Juan Carlos SYSTEM FOR THE SIMULATION OF SURGICAL PRACTICES.
JP4636146B2 (en) * 2008-09-05 2011-02-23 ソニー株式会社 Image processing method, image processing apparatus, program, and image processing system
US8315720B2 (en) * 2008-09-26 2012-11-20 Intuitive Surgical Operations, Inc. Method for graphically providing continuous change of state directions to a user of a medical robotic system
US8250001B2 (en) * 2008-12-18 2012-08-21 Motorola Mobility Llc Increasing user input accuracy on a multifunctional electronic device
US8948483B2 (en) * 2009-03-31 2015-02-03 Koninklijke Philips N.V. Automated contrast enhancement for contouring
US8009022B2 (en) * 2009-05-29 2011-08-30 Microsoft Corporation Systems and methods for immersive interaction with virtual objects
US20110050575A1 (en) * 2009-08-31 2011-03-03 Motorola, Inc. Method and apparatus for an adaptive touch screen display
US8717360B2 (en) 2010-01-29 2014-05-06 Zspace, Inc. Presenting a view within a three dimensional scene
DE102010009065B4 (en) 2010-02-23 2018-05-24 Deutsches Zentrum für Luft- und Raumfahrt e.V. Input device for medical minimally invasive robots or medical simulators and medical device with input device
WO2012047626A1 (en) 2010-09-27 2012-04-12 University Of Pittsburgh - Of The Commonwealth System Of Higher Education Portable haptic force magnifier
US8786529B1 (en) 2011-05-18 2014-07-22 Zspace, Inc. Liquid crystal variable drive voltage
US9569594B2 (en) 2012-03-08 2017-02-14 Nuance Communications, Inc. Methods and apparatus for generating clinical reports
US9569593B2 (en) * 2012-03-08 2017-02-14 Nuance Communications, Inc. Methods and apparatus for generating clinical reports
US20130343640A1 (en) 2012-06-21 2013-12-26 Rethink Robotics, Inc. Vision-guided robots and methods of training them
US8532675B1 (en) 2012-06-27 2013-09-10 Blackberry Limited Mobile communication device user interface for manipulation of data items in a physical space
US9384528B2 (en) * 2014-05-28 2016-07-05 EchoPixel, Inc. Image annotation using a haptic plane
US9603668B2 (en) * 2014-07-02 2017-03-28 Covidien Lp Dynamic 3D lung map view for tool navigation inside the lung
EP3232421B1 (en) * 2014-12-11 2019-07-24 Terumo Kabushiki Kaisha Passage test device for medical long body, and method for evaluating passage of medical long body
CN114376733A (en) 2015-06-09 2022-04-22 直观外科手术操作公司 Configuring a surgical system using a surgical procedure atlas
US10912619B2 (en) * 2015-11-12 2021-02-09 Intuitive Surgical Operations, Inc. Surgical system with training or assist functions
US20210295048A1 (en) * 2017-01-24 2021-09-23 Tienovix, Llc System and method for augmented reality guidance for use of equipment systems
US20210327304A1 (en) * 2017-01-24 2021-10-21 Tienovix, Llc System and method for augmented reality guidance for use of equipment systems
CA3049148A1 (en) 2017-01-24 2018-08-02 Tietronix Software, Inc. System and method for three-dimensional augmented reality guidance for use of medical equipment
US20210327303A1 (en) * 2017-01-24 2021-10-21 Tienovix, Llc System and method for augmented reality guidance for use of equipment systems
US11316865B2 (en) 2017-08-10 2022-04-26 Nuance Communications, Inc. Ambient cooperative intelligence system and method
US20190051375A1 (en) 2017-08-10 2019-02-14 Nuance Communications, Inc. Automated clinical documentation system and method
CN111770737A (en) 2017-12-28 2020-10-13 奥博斯吉科有限公司 Special tactile hand controller for microsurgery
US11250382B2 (en) 2018-03-05 2022-02-15 Nuance Communications, Inc. Automated clinical documentation system and method
US20190272895A1 (en) 2018-03-05 2019-09-05 Nuance Communications, Inc. System and method for review of automated clinical documentation
US11515020B2 (en) 2018-03-05 2022-11-29 Nuance Communications, Inc. Automated clinical documentation system and method
US11043207B2 (en) 2019-06-14 2021-06-22 Nuance Communications, Inc. System and method for array data simulation and customized acoustic modeling for ambient ASR
US11216480B2 (en) 2019-06-14 2022-01-04 Nuance Communications, Inc. System and method for querying data points from graph data structures
US11227679B2 (en) 2019-06-14 2022-01-18 Nuance Communications, Inc. Ambient clinical intelligence system and method
US11531807B2 (en) 2019-06-28 2022-12-20 Nuance Communications, Inc. System and method for customized text macros
US11670408B2 (en) 2019-09-30 2023-06-06 Nuance Communications, Inc. System and method for review of automated clinical documentation
US11222103B1 (en) 2020-10-29 2022-01-11 Nuance Communications, Inc. Ambient cooperative intelligence system and method

Citations (5)

Publication number Priority date Publication date Assignee Title
US5802353A (en) * 1996-06-12 1998-09-01 General Electric Company Haptic computer modeling system
US5882206A (en) * 1995-03-29 1999-03-16 Gillio; Robert G. Virtual surgery system
US5952796A (en) * 1996-02-23 1999-09-14 Colgate; James E. Cobots
US6083163A (en) * 1997-01-21 2000-07-04 Computer Aided Surgery, Inc. Surgical navigation system and method using audio feedback
US6084587A (en) * 1996-08-02 2000-07-04 Sensable Technologies, Inc. Method and apparatus for generating and interfacing with a haptic virtual reality environment

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6024576A (en) * 1996-09-06 2000-02-15 Immersion Corporation Hemispherical, high bandwidth mechanical interface for computer systems
US6650338B1 (en) * 1998-11-24 2003-11-18 Interval Research Corporation Haptic interaction with video and image data
US6529183B1 (en) * 1999-09-13 2003-03-04 Interval Research Corp. Manual interface combining continuous and discrete capabilities


Non-Patent Citations (2)

Title
FLEUTE M ET AL: "Building a complete surface model from sparse data using statistical shape models: application to computer assisted knee surgery", Medical Image Computing and Computer-Assisted Intervention (MICCAI), International Conference Proceedings, October 1998 (1998-10-01), XP000913649 *
ROSEN J M ET AL: "Evolution of virtual reality: from planning to performing surgery", IEEE Engineering in Medicine and Biology Magazine, US, IEEE Inc., New York, vol. 15, no. 2, 1 March 1996 (1996-03-01), pages 16-22, XP000558485, ISSN: 0739-5175 *

Cited By (16)

Publication number Priority date Publication date Assignee Title
US8767054B2 (en) * 2005-12-14 2014-07-01 Kolpi Viewing system for the manipulation of an object
US20090273665A1 (en) * 2005-12-14 2009-11-05 Olivier Kleindienst Viewing System for the Manipulation of an Object
WO2007068824A3 (en) * 2005-12-14 2007-08-09 Kolpi Sarl A Cv Viewing system for the manipulation of an object
JP2009519481A (en) * 2005-12-14 2009-05-14 コルピ サール ア シーヴィー Observation system for manipulating objects
FR2894684A1 (en) * 2005-12-14 2007-06-15 Kolpi Sarl VISUALIZATION SYSTEM FOR HANDLING AN OBJECT
WO2007068824A2 (en) * 2005-12-14 2007-06-21 Kolpi Sarl A Cv Viewing system for the manipulation of an object
US7925068B2 (en) * 2007-02-01 2011-04-12 General Electric Company Method and apparatus for forming a guide image for an ultrasound image scanner
EP3092969A3 (en) * 2009-11-13 2017-03-01 Intuitive Surgical Operations, Inc. A master finger tracking device and method of use in a minimally invasive surgical system
US9298260B2 (en) 2010-03-12 2016-03-29 Broadcom Corporation Tactile communication system with communications based on capabilities of a remote system
EP2365421A3 (en) * 2010-03-12 2015-04-15 Broadcom Corporation Tactile communication system
US9901402B2 (en) 2010-09-21 2018-02-27 Intuitive Surgical Operations, Inc. Method and apparatus for hand gesture control in a minimally invasive surgical system
US10543050B2 (en) 2010-09-21 2020-01-28 Intuitive Surgical Operations, Inc. Method and system for hand presence detection in a minimally invasive surgical system
US11707336B2 (en) 2010-09-21 2023-07-25 Intuitive Surgical Operations, Inc. Method and system for hand tracking in a robotic system
CN102646350B (en) * 2011-02-22 2013-12-11 上海理工大学 Centrum location device for virtual surgery force sense information acquisition
CN102646350A (en) * 2011-02-22 2012-08-22 上海理工大学 Centrum location device for virtual surgery force sense information acquisition
CN111202568A (en) * 2020-03-18 2020-05-29 杨红伟 Magnetic force feedback operation instrument of gynecological and obstetrical surgical robot

Also Published As

Publication number Publication date
US7236618B1 (en) 2007-06-26

Similar Documents

Publication Publication Date Title
US7236618B1 (en) Virtual surgery system with force feedback
Chu et al. Multi-sensory user interface for a virtual-reality-based computer-aided design system
Delingette et al. Craniofacial surgery simulation testbed
Gannon et al. Tactum: a skin-centric approach to digital design and fabrication
Rosen et al. Evolution of virtual reality [Medicine]
Luboz et al. ImaGiNe Seldinger: first simulator for Seldinger technique and angiography training
Krapichler et al. VR interaction techniques for medical imaging applications
Smith et al. Mixed reality interaction and presentation techniques for medical visualisations
Suzuki et al. Sphere-filled organ model for virtual surgery system
Krapichler et al. Physicians in virtual environments—multimodal human–computer interaction
Burdea Virtual reality and robotics in medicine
Thalmann et al. Virtual reality software and technology
Hinckley et al. Three-dimensional user interface for neurosurgical visualization
Li et al. Continuous dynamic gesture spotting algorithm based on Dempster–Shafer Theory in the augmented reality human computer interaction
Porro et al. An integrated environment for plastic surgery support: building virtual patients, simulating interventions, and supporting intraoperative decisions
Krapichler et al. A human-machine interface for medical image analysis and visualization in virtual environments
Chou et al. Human-computer interactive simulation for the training of minimally invasive neurosurgery
Englmeier et al. Virtual reality and multimedia human-computer interaction in medicine
Subramanian Tangible interfaces for volume navigation
Robb The virtualization of medicine: a decade of pitfalls and progress
Müller et al. Virtual reality in the operating room of the future
Magnenat-Thalmann et al. Virtual reality software and technology
Krapichler et al. Human-machine interface for a VR-based medical imaging environment
Rahmani Hanzaki Surgical Training Using Proxy Haptics; A Pilot Study
EP4286991A1 (en) Guidance for medical interventions

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): JP SG US

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP