WO2011043645A1 - Display system and method for displaying a three dimensional model of an object - Google Patents

Display system and method for displaying a three dimensional model of an object

Info

Publication number
WO2011043645A1
Authority
WO
WIPO (PCT)
Prior art keywords
view data
input device
display system
dimensional model
display
Prior art date
Application number
PCT/NL2009/050607
Other languages
French (fr)
Inventor
Jurriaan Derk Mulder
Original Assignee
Personal Space Technologies
Priority date
Filing date
Publication date
Application filed by Personal Space Technologies filed Critical Personal Space Technologies
Priority to PCT/NL2009/050607
Publication of WO2011043645A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/033 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F 3/0346 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/04815 Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2203/00 Indexing scheme relating to G06F 3/00 - G06F 3/048
    • G06F 2203/048 Indexing scheme relating to G06F 3/048
    • G06F 2203/04805 Virtual magnifying lens, i.e. window or frame movable on top of displayed information to enlarge it for better reading or selection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2203/00 Indexing scheme relating to G06F 3/00 - G06F 3/048
    • G06F 2203/048 Indexing scheme relating to G06F 3/048
    • G06F 2203/04806 Zoom, i.e. interaction techniques or interactors for controlling the zooming operation

Definitions

  • the invention relates to a display system for displaying a three dimensional model of an object.
  • the invention further relates to a method for displaying a three dimensional model of such an object.
  • the input parameters of the rendering unit are input by manipulating a known input device such as for example a mouse, a joystick, or a gamepad.
  • moving a joystick to the left may rotate the viewed object along a vertical axis in one direction
  • moving a joystick to the right may rotate the viewed object along the same axis in the opposite direction
  • moving the joystick forward may bring the viewed object closer
  • moving it back may move the viewed object further away.
  • by "viewed object" here is meant the (virtual) object as represented in the rendered view of the three dimensional data on the display.
  • a disadvantage of the known display systems is that manipulating the user interface is often a cumbersome process.
  • a more detailed look at the object is obtained by moving the rendered object closer, which means that parts of the object view may fall outside of the display area, which can easily lead to the user losing the feeling that she is interacting with a real object. This is even worse if the user intended to examine a part of the object that has moved out of view while zooming in on the object.
  • An object of the invention is achieved by providing a display system for displaying a three dimensional model of an object, the system comprising
  • an input device interface arranged to generate a first input signal representing a state of the first input device and a second input signal representing a state of the second input device
  • a processing unit arranged to receive the first and second input signals from the input device interface and arranged to determine a first model rendering parameter set based on the first input signal and independent of the second input signal and a second model rendering parameter set based on the first and second input signals,
  • a rendering unit arranged to receive the first model rendering parameter set and the second model rendering parameter set from the processing unit, to render as first view data a first part of the three dimensional model of the object according to the first model rendering parameter set, to render as second view data a second part of the three dimensional model of the same object according to the second model rendering parameter set, and to combine the first and second view data,
  • a display unit arranged to receive the combined view data from the rendering unit and to display said combined view data
  • a three dimensional model of an object is a collection of data that can for example comprise three dimensional vertex coordinates, volumetric textures, and/or surface textures that can be used to provide a two dimensional or three dimensional representation of an object.
  • the object may be of any type, a real object, virtual object, a measurement data set, etc.
  • the concepts "three dimensional data”, “three dimensional model data”, “three dimensional model”, “model data”, and “model” may be used interchangeably in this document.
  • a real object such as for example a historical artifact, may be captured as three dimensional data using for example a known 3D camera or a 3D scanner.
  • a three dimensional model of an object can also be a collection of measurement data, for example medical data such as from a CT scan, an MRI scan, or a fusion of various types of scans.
  • Three dimensional models can also be created manually, for example by a graphical artist.
  • a three-dimensional model can comprise more information than can be seen in a single representation on a two dimensional or even a three dimensional screen.
  • a three dimensional model of an object may provide texture data representing the outer surface of a closed object, and also provide data on parts contained within the closed object, which are normally not rendered visible.
  • data originating from more than one type of probe or measuring equipment may be provided, so different sets of data can be rendered.
  • Medical image fusion data sets form an example of three dimensional models having multiple data sets.
  • the three dimensional model comprises temporal data, and can thus be said to have a time dependence.
  • a three dimensional model may comprise model data sampled at different points in time, so that the rendering unit can render view data of the three dimensional model at different points in time. This makes it possible to show the model at different points in time in succession, as a real-time or non-real-time animation.
  • the time dependence of a three dimensional model can also be implemented by applying a mathematical formula or a body of computer programming. In this document, it is implied that the mentioned three dimensional models, and the rendered view data thereof, may have a time dependence.
  • a three dimensional model of an object can be rendered, which means that view data is created based on the three dimensional data as would be captured or "seen” by a virtual camera placed at a certain position, having a certain vector called “up", a certain viewing direction, lens properties, and a field of view.
  • the position, viewing direction, up vector or up direction, and general properties of the virtual camera are input parameters, or model rendering parameters, of the rendering unit.
  • Model rendering parameter sets comprise the input needed for a renderer to render a view of a three dimensional model, not including the three dimensional model data itself.
  • the model rendering parameter set comprises the virtual camera location, camera view direction, camera up direction, and camera field of view.
  • the three dimensional model data has more than one data set, and the selection of which data set to render is also a model rendering parameter.
  • the model rendering parameter set comprises parameters related to the lighting of the virtual object represented by the three dimensional model.
  • View data, as output by the rendering unit, may be a two-dimensional image, or it may be a stereo image suitable for a(n) (auto)stereoscopic display. It may also be in another viewing format suitable for a two dimensional, stereoscopic, or three dimensional display system.
  • a display unit may comprise a screen for displaying view data. It may further comprise additional devices, such as for example a display projector for projecting an image on the screen, or glasses for certain types of stereoscopic displays.
  • the phrase "displaying a three dimensional model” generally indicates rendering view data of a three dimensional model of an object, using input parameters as described above, and displaying the view data on a display unit.
  • the display unit comprises a touch screen, arranged to offer a user interface to control aspects of the display system. For example, a user might use such a user interface to select a three dimensional model of an object for viewing.
  • the processing unit can be the central processing unit (CPU), and the rendering unit can be the graphics processing unit (GPU).
  • the invention is not limited to such a strict separation of processing unit and rendering unit. Both units may be implemented on the same piece of hardware, such as a semiconductor device.
  • the rendering unit may be implemented predominantly in hardware, software, or a general mix of hardware and software.
  • the state of the input device comprises measurable quantities such as position and orientation of the device (in particular relative to the input device interface), but in other embodiments the state of the input devices also comprises values derived from these quantities, such as velocity (rate of change of position) or angular velocity (rate of change of orientation about an axis of the object), and even second derivative quantities, such as acceleration.
  • the input device is equipped with buttons, joystick controls, or other input sensors such as a grip sensor, temperature sensor, etc.
  • the state of the input device also comprises the state of these buttons, joystick controls, and any other sensors.
  • An advantage of the display system as described above is that it provides two input devices, for example one for each hand of the user, thus providing more degrees of freedom than a single such input device.
  • a further advantage of the system is that it renders two sets of view data based on three dimensional model data of the same object, thus offering two views of the object instead of one.
  • the first view data provides an overview of the object, while the second view data provides a view of a part of the object. By combining both views in an overlapping manner, the user can see they are views of the same object.
  • said two views are combined in such an overlapping manner that, from the position of the second view data with respect to the object shown in the first view data, it is clear which part of the object is shown in the second view data, which is advantageous since it allows a user to see at a single glance which part of the object is shown in the second view.
  • Combining two views in an overlapping manner does not necessarily mean that the rendering unit has to render the (invisible) part of the one view that is overlapped by the other view.
  • first view data is representative of substantially the entire object
  • second view data is representative of said second part of the object
  • the second view data is combined with the first view data in such a manner, that the second view data is displayed at the display unit at a second view data display position relative to the first view data display position that essentially corresponds to the position of the second part of the object relative to the position of the first part of the object.
  • the part of the object represented by the second view data substantially corresponds to the part of the same object that would be represented by the part of the first view data if the second view data did not overlap said part of the first view.
  • the two views are thus combined in such an overlapping manner that the part of the object as shown in the second view data corresponds to the same part of the same object as would be shown in the first view data if the second view data did not overlap it.
  • the processing unit is arranged to determine the first model rendering parameter set independent of the second input signal. This advantageously allows a user to manipulate the first view data, generally representing an overview of the three dimensional model of an object, using a single input device.
  • the processing unit is arranged to determine the second model rendering parameter set based on the second input signal relative to the first input signal. In an embodiment, the processing unit is arranged to determine the second model rendering parameter set based on a signal representing the relative position of the second input device with respect to the first input device.
  • the first input device and/or the second input device is formed as a passive input device that is designed to be used while holding the input device in one hand. This is advantageous, since passive input devices require no power supply and no cable towards a central part, making them easier to handle. Designing an input device to be held in one hand means that both input devices can be easily handled simultaneously by a user, by holding each in one hand.
  • the second view data shows a part of the object at a higher level of detail or a higher magnification factor than the first view data.
  • the second view data is rendered using three dimensional model data having a higher level of detail than the three dimensional model data used for rendering the first view data.
  • since rendering units generally have a limited amount of resources for rendering three dimensional data, it is advantageous to only render a highly detailed three dimensional model when necessary, and to render a somewhat less detailed three dimensional model otherwise. In the display system, it can thus be advantageous to render the second view data using a more detailed three dimensional model than the first view data. Overall, rendering unit resources are spared, resulting in faster performance of the display system.
  • the first and second view data are rendered using different data sets of the three dimensional model.
  • when the three dimensional model comprises multiple data sets, for example one data set modeling the exterior of an object and one data set modeling parts within the object, it is advantageous to allow the user to view both data sets intuitively. This is most advantageously achieved by selecting the more recognizable data set, for example representing the object's exterior, for rendering the first view data, and another data set, for example representing a measurement of the interior of the object, for rendering the second view data.
  • the second view data is provided with a displayed symbol indicative of the difference between the second model rendering parameter set and the first model rendering parameter set.
  • a symbol might represent a magnifying glass, if the second view data shows a more detailed or enlarged part of the object, as compared to the first view data. This advantageously gives the user a visual cue of the function of the second input device and the character of the second view data.
  • the symbol is a representation of the second input device.
  • a representation of the second input device is displayed on the display, and the second view data is displayed substantially inside the area occupied and encompassed by the representation of the second input device, and the first view data is displayed substantially outside said area. This advantageously reinforces the appearance of a functional link between the second input device and the second view data.
  • the input device interface comprises an object tracking unit for tracking the position and/or orientation of the first and/or the second input device. This advantageously allows an implementation using passive input devices.
  • the object tracking unit comprises an optical sensor and the first and/or second input device comprise optical reflecting markers. Measuring the reflection of light on a marker using an optical sensor allows the object tracking unit to determine the position of the markers, which, combined with a knowledge of the position of the markers on the input devices, makes it possible to work out a relative position and/or orientation of the input devices.
  • the three dimensional model of an object represents a physical object.
  • this object can be a historical artifact that is too precious or vulnerable to be handled by members of the public.
  • the display system advantageously allows members of the public to manipulate and analyze a three dimensional representation of the object as if it were the physical object.
  • the three dimensional model of an object is based on medical imaging data. In an embodiment the three dimensional model of an object is based on seismic data.
  • a display system according to the invention can advantageously be used to examine and analyze data sets as obtained in for example medical imaging or seismic measurements.
  • the state represented by the first input signal is the orientation and position of the first input device and the state represented by the second input signal is the orientation and position of the second input device.
  • measuring a signal representing the relative position and orientation of the input devices allows these signals to be used as a basis for determining the rendering parameters of the three dimensional model. For example, by transferring relative translation and rotation applied by the user to the input devices to the rendered three dimensional model data, intuitive manipulation of the three dimensional model representation is achieved.
  • an input device is formed by a hand of the user.
  • an object tracking unit comprising a camera and a tracking processing unit arranged to determine hand position and/or orientation based on optical data from the camera.
  • this removes the need for separate input devices.
  • the display system is arranged to set a current position and/or orientation of the first input device as the initial position and/or orientation of the first input device, corresponding to an initial first model rendering parameter set for rendering an initial, default, first view data of the three dimensional model of an object.
  • the state represented by the first input signal is the orientation and position of the first input device and the state represented by the second input signal is the orientation and position of the second input device. Mapping the relative orientation and position of an input device to an orientation and position of a displayed virtual object provides an intuitive way to manipulate the position and orientation of said virtual object.
  • the first input device has a substantially dodecahedral shape. The dodecahedral shape has many sides to place markers on, making optical detection of the device's position and orientation easier.
  • the first input device is a replica of the object represented by the three dimensional model of an object. Linking the form of the first input device to the object shown on the display makes it easier for the user to understand that manipulating the first input device will have an effect on the displayed object.
  • the second input device is shaped like a magnifying glass. Linking the form of the second input device to its function makes understanding its function easier.
  • the display system comprises a plurality of first input devices, and the system is arranged to detect which first input device is selected by a user.
  • the rendering unit has access to a plurality of three dimensional models, and is arranged to select a three dimensional model of an object for rendering based on the user selection of a first input device. This lets a user pick up one of a plurality of first input devices, automatically leading to a corresponding three dimensional model of an object to be shown. This removes the need for a manual selection of a three dimensional model.
  • the display system comprises a plurality of second input devices, and the system is arranged to detect which second input device is picked up by a user.
  • the rendering unit has access to a plurality of second model rendering parameter sets, and is arranged to select a second model rendering parameter set based on the user selection of a second input device. This allows a user to select a second view rendering mode by selecting one of a plurality of second input devices. For example, the user may pick up a second input device resembling a magnifying glass to enable the second model rendering parameter set that gives more detailed and enlarged second view data.
  • the display unit comprises a display device capable of conveying a depth impression or a three dimensional display device. This can advantageously reinforce the illusion that the user is manipulating a real object.
  • the display system comprises at least an additional display unit, wherein said additional display unit is arranged to display the same three dimensional model of an object as the display unit. Having a second display can allow multiple people to examine the object as it is being manipulated and analyzed by a user.
  • the display system comprises the object which is represented by the three dimensional model of an object.
  • the invention provides a method for displaying a three dimensional model of an object, comprising the steps of
  • Figure 1 schematically shows a display system according to the invention.
  • Figure 2A schematically shows a user interacting with a display system according to the invention.
  • Figure 2B schematically shows a displayed image on a display system according to the invention.
  • Figure 2C schematically shows a further displayed image on a display system according to the invention.
  • Figure 3 schematically shows a first input device.
  • Figure 4 schematically shows a second input device.
  • Figure 5 schematically shows a first input device that is shaped as a replica of modeled data.
  • FIG. 1 schematically shows an embodiment of a display system 50 according to the invention.
  • the shown display system 50 comprises a first input device 51, a second input device 52, and an input device interface 53.
  • the input devices 51 and 52 are wireless, graspable devices. By wireless is meant that they are not physically connected to the input device interface 53. By graspable is meant that they are designed to be held in the hand.
  • the input device interface 53 generates first and second input signals corresponding to the state of the first and second input devices 51 and 52 respectively.
  • the input device interface 53 comprises an object tracking unit (not shown).
  • the object tracking unit may comprise a scanning infrared source for emitting infrared light, an infrared sensor for detecting reflections of the emitted infrared light, and a tracking processing unit arranged to control the infrared source and to read and interpret the signals from the infrared sensor.
  • the input devices may comprise infrared reflecting markers or retro-reflective markers, with particular surfaces of the input device being provided with particular marker patterns. A skilled person will know how to arrange the aforementioned components in order to provide a tracking unit that can be used to generate signals corresponding to the relative position and orientation of one, two, three, or more input devices. It is preferred to use infrared light, since it is not visible to the human eye.
  • instead of infrared light, another type of light or another type of electromagnetic radiation may be used.
  • An advantage of the infrared based tracking unit as described here is that the input devices can be passive, i.e. not requiring a battery or other source of power, which simplifies the use and maintenance of the input devices.
  • Alternative implementations may comprise active markers or inertial sensors, which are within reach of a skilled person. Inertial sensors measure a quantity representing change of movement, which can be converted into a signal representing position and/or orientation.
  • a skilled person will know that other implementations comprising one or more input devices and an input device interface which generates signals corresponding to for example the positions and orientations of the input devices can be provided.
  • Such alternative systems are also within reach of a skilled person, and may for example involve Bluetooth, IrDA, acoustic signals, or other wireless or wired interfacing standards.
  • Such a system may also involve a camera coupled to a computer vision device for measuring for example a position and orientation of an input device observed by the camera, which advantageously can remove the need to place markers on the input device.
  • the input device interface 53 makes the first and second input signals representing the state of the first and second input devices 51 and 52 respectively available to the processing unit 54.
  • the processing unit 54 is arranged to determine rendering parameters for the rendering unit 55 based on the received input signals.
  • the processing unit 54 accepts two input signals, corresponding to the state of two input devices, and makes two model rendering parameter sets available to the rendering unit 55.
  • a different number of input signals may be received, and/or a different number of model rendering parameter sets may be generated.
  • the processing unit can be a general purpose processing unit, such as a PC compatible central processing unit (CPU), running a software program that is stored on an internal or external memory (not shown).
  • the rendering unit 55 is connected to a memory unit (not shown). In the memory unit, three dimensional data can be stored, so that the rendering unit 55 can read it.
  • the rendering unit 55 is arranged to receive two model rendering parameter sets from the processing unit 54.
  • the rendering unit 55 is arranged to render view data of a three dimensional model of an object based on the render parameters as received from the processing unit 54.
  • the rendering unit generates view data, comprising the one or more rendered views, which it makes available to a display unit.
  • the three dimensional data is received from an external storage device, for example a storage device connected to a computer network.
  • a storage device may be part of a Picture Archiving and Communication System (PACS).
  • not all three dimensional model data is read before rendering begins. This technique is also known as procedural rendering.
  • the display unit 56 is arranged to receive view data from the rendering unit, and to make said view data visible on a user viewable part, such as a display screen.
  • the display unit 56 is an LCD monitor, and the view data conforms to an industry standard video signal, such as for example an HDTV 1920x1080p HDMI signal.
  • the display unit 56 comprises a display projector.
  • the display unit is an LCD monitor comprising a touch screen, arranged to offer a user interface to control aspects of the display system. For example, a user might use such a user interface to select a three dimensional model of an object for viewing.
  • the display unit 56 is a display capable of conveying a depth impression, for example a stereoscopic display (with for example autoshutter or polarized glasses), an autostereoscopic display (for example based on barrier or lenticular technology), or another three dimensional display, and the view data is in a format suitable for the chosen display, for example stereo format, or image-plus-depth format.
  • the display unit may comprise a general three dimensional display device, such as a holographic display.
  • the rendering unit is used to render first view data from the three dimensional model data, wherein substantially the entire object represented by the three dimensional model is visible in the first view data.
  • the virtual camera is placed so that the viewing angle and the viewing distance are representative of the relative orientation and the relative position of the first input device. That is, if the user holding the first input device rotates the first input device counter clockwise, the rendered object in the view will also rotate counter clockwise.
  • the view is refreshed at a frequency that is suitable for video material, such as for example 60 times per second (60 Hz).
  • a rotation of the first input device around any axis of the device will result in a rotation of the rendered object around a corresponding axis.
  • a translation of the first input device in any direction will result in a translation of the rendered object in a similar direction.
  • the system is arranged to automatically detect and set a default orientation and position of the first input device which results in view data representing the entire object in what can be called a standard upright position, in other words, placing the virtual camera in a sensible starting position.
  • the system might detect the event of a user picking up the input device, and set the subsequent position and orientation of the input device as corresponding to this starting position.
  • the advantage of this automatic detection is that the user is not required to move the input device to a pre-determined position and into a pre-determined orientation in order to see the object in the manner described, giving the user freedom to move around.
  • the rendering unit is used to render second view data from the three dimensional model data.
  • This second view can be a modified view of a part of the object, where the selected part and the details of the modification depend on the position and orientation of the first input device and the position and orientation of the second input device.
  • the second model rendering parameter set, which also describes the modification of the second view, is at least partially based on the position and orientation of the second input device relative to the position and orientation of the first input device.
  • the modification is a magnification or enlargement of a part of the object, optionally showing extra details not visible in the first view data.
  • the first input device can be said to represent the three dimensional object and the second input device can be said to represent a magnifying glass.
  • a corresponding virtual magnifying glass is used in the rendering calculations of the rendering unit, resulting in a second view of the three dimensional model data which corresponds to what a user looking through a real magnifying glass at the real object would see, if a real magnifying glass were positioned and oriented with respect to the real object as the second input device is positioned and oriented with respect to the first input device.
  • the first view data and the second view data are combined together to form view data for displaying.
  • This view data may be formed by taking the first view as a basis and superimposing the second view, with the modified or magnified view of the data, on top of a part of it. It is advantageous to let the second view which is superimposed on the first view have a circular circumference, resembling the shape of the magnifying glass.
  • the frame of a magnifying glass may be rendered as well.
  • some kind of transition area is created between the first view and the second view, so that the first will blend seamlessly into the second, creating a pleasing visual effect.
  • FIG 2A schematically shows a user 10 interacting with a display system 11 according to the invention.
  • the user is holding a first input device 12 in his left hand, and a second input device 13 in his right hand.
  • the display system 11 comprises several visible parts: a housing 15, a display 14, and an object tracking unit 16 for tracking the first and second input devices 12, 13.
  • the non-visible parts of the display system which have been discussed in relation with figure 1 are, in this embodiment, located inside the display system housing 15.
  • the display unit is external to the display system housing 15.
  • Lines of sight a and b in figure 2A illustrate a possible way of interacting with the display system.
  • the user looks through the second input device 13 (here represented by an object shaped like a magnifying glass) at the first input device 12 (here represented by a multi-sided object).
  • the user looks at the display without input devices 12, 13 being in the line of sight b.
  • the user can manipulate the input devices and magnify, or otherwise show modified views of, interesting parts of the object.
  • Use of the display system according to the invention is not limited to looking along the two lines of sight a and b as described above.
  • the second view data as shown on the display is determined by the relative position and orientation of the second input device 13 with respect to the first input device 12.
  • the housing 15 of the display system 11 also offers a display case for one or more physical objects for which the display system has three dimensional models. Said display case may also be placed somewhere else in the vicinity of the display system 11. This has several advantageous effects. In a museum context, for example, a visitor can scrutinize the original object, looking for details of interest, and then use the display system to examine close-ups of these details.
  • FIG. 2B shows an example of view data as displayed on the display unit 14.
  • the first part of the view data shows the rendered first view data 17, representing the object in the position and orientation representative of the position and orientation of the first input device.
  • second view data 18 is displayed surrounded by the first view data 17.
  • the second view data 18 represents a view of the object rendered with a larger magnification factor than the first view data is rendered with.
  • the view data of figure 2B can be created by superimposing second view data 18 on top of first view data 17, which fills substantially the entire display area, whereby the second view data 18, surrounded by the displayed magnifying glass 19, is placed at a relative position on the display such that the illusion is created that the second view data 18 is seen through a magnifying glass approximately represented by the displayed magnifying glass 19.
  • there are several ways in which a rendering unit can realize a magnifying glass effect as shown in figure 2B. One example is simply rendering two views and combining them, possibly superimposing one on the other; a minimal compositing sketch of this approach is given after this list. An alternative is to actually add a three dimensional model of a magnifying glass, including lens, to the three dimensional model data and render the view in one go using a suitable technique such as raytracing.
  • the notion of a first and a second view, where the second view shows a modified version of the first view also applies to said ray-tracing technique, even if it appears that both views are rendered simultaneously as a single view.
  • Beside superimposing views and raytracing, other techniques to obtain the effect will be known to a skilled person.
  • the invention is not limited to any of the techniques described in this document.
  • Figure 2C shows an alternative display of view data on a display unit 14.
  • the first view 17 shows the object as described in relation to figure 2B, but the second view 20, surrounded by a displayed circle to clearly separate the first and second view data, shows a cross section of the object.
  • three dimensional model data from (3D) Ultrasound, X-Ray, CT Scan, or Magnetic Resonance Imaging (MRI) may be used, as is commonly done for certain types of historical artifacts.
  • This is also an example of having multiple modalities in the three dimensional model data.
  • the outside appearance of the object, including its outer shape and surface texture, is shown in the first view 17 (modality one)
  • the MRI data, for example, is used in the second view (modality two).
  • Figure 3 shows a first input device 12 according to an embodiment of the invention.
  • the first input device 12 has multiple sides, which are each provided with markers 21 in a certain pattern.
  • the first input device 12 shown here is shaped like a dodecahedron, but the invention is not limited to similarly shaped first input devices. Essentially any object to which markers can be attached can be used as a first input device.
  • the object tracking unit of the display system is arranged to detect the markers on the first input device which are visible to the tracking unit's sensor.
  • the implementation may comprise an infrared light source, infrared reflecting markers, and an infrared sensor.
  • the tracking processing unit has access to stored data regarding the locations of the markers on the first input device.
  • the tracking processing unit is arranged to match this knowledge against the pattern of markers currently being detected, and from that work out the current position and orientation of the first input device. This is a known technique to determine an orientation and position of an object, and a skilled person will be able to implement this. A skilled person will also know how to make this system robust, for example by handling the event of a marker being partially or fully covered by the hand of the user holding the first input device.
  • the invention is not limited to the use of markers as discussed in relation with figure 3 and before.
  • an alternative is a computer vision approach where a visible light camera follows the input device, and a processing unit determines the position and orientation of the input device from the tracked shape.
  • Figure 4 shows a second input device 13 according to an embodiment of the invention.
  • the second input device 13 is provided with a number of markers in a specific pattern. The determination of the orientation and position of the second input device 13 is done in the same way as for the first input device 12.
  • the second input device has a shape that resembles a magnifying glass. This makes it distinct from the first input device 12, and also acts as a visual cue for the user regarding the function of the second input device.
  • the display system provides a number of different second input devices.
  • One might be shaped like a magnifying glass 13 as in figure 4, whereas another can be shaped like for example a triangular MRI warning symbol.
  • depending on which second input device is picked up, the system will use a different second view rendering mode. For example, if the user picks up the magnifying glass, the magnification mode as discussed in relation with figure 2B is used, whereas if the user picks up the MRI symbol, the cross-section mode as discussed in relation with figure 2C is used. Detection of which second input device is picked up by the user might be achieved by applying a different pattern of markers to each of the second input devices.
  • Figure 5 shows an alternative form for a first input device according to the invention, where the first input device is not shaped like the multi-sided object of figure 3, but is actually shaped to resemble the original object of which a three dimensional model is displayed.
  • This has the advantage that it is more intuitive, and thus easier, for the user to manipulate an object whose rotations and translations are applied to the viewed object representation, when that object resembles the viewed object.
  • a single display system may have access to three dimensional model data for a number of original objects.
  • the display system may also be provided with a number of first input devices, each first input device being shaped like one of the original objects.
  • when a user picks up one of the first input devices, for example removing it from a tray on the display system, the display system will register this event and load into the rendering unit the three dimensional model data corresponding to the object which said first input device resembles. Detection of which first input device was picked up, and therefore which model to load, might be accomplished, for example, by applying a different marker pattern to each of the first input devices.
  • the display system may be provided with at least one additional display unit.
  • Such an additional display unit can be used in various ways. For example in “slave mode", the additional display unit may receive the same view data as the (main) display unit. An advantage of such a setup is that it can be used to give presentations, where the viewers of the presentation watch the images on an additional display unit, and the (main) display unit is used by the presenter.
  • the additional display unit may receive additional view data, based on the state of additional first and second input devices, but representing the same three dimensional model of an object as the model visible on the main display unit.
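As referenced in the discussion of figure 2B above, the magnifying-glass combination of the two views can be realized by superimposing a circular cut-out of the second view on the first view. The following Python sketch illustrates one such compositing step; it is a minimal illustration only, the function and parameter names are hypothetical, and views are assumed to be HxWx3 image arrays rendered from the same model.

```python
import numpy as np

def composite_magnifier(first_view, second_view, center, radius):
    """Superimpose a circular cut-out of second_view on first_view.

    Hypothetical sketch: first_view and second_view are HxWx3 arrays
    rendered from the same three dimensional model; center is the (x, y)
    pixel position of the displayed magnifying glass; radius is the lens
    radius in pixels.
    """
    h, w = first_view.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    inside = (xs - center[0]) ** 2 + (ys - center[1]) ** 2 <= radius ** 2
    combined = first_view.copy()
    combined[inside] = second_view[inside]  # lens area shows the second view
    # A rendered frame (and optionally a blended transition band, as
    # mentioned above) could be drawn around the lens area to resemble
    # the displayed magnifying glass 19 of figure 2B.
    return combined
```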

Abstract

The invention provides a display system for displaying a three dimensional model of an object and a method for displaying a three dimensional model of an object. In an embodiment, the display system provides two input devices. In an embodiment the system renders two sets of view data based on three dimensional model data of the same object, thus offering two views of the object. In an embodiment, said two views are displayed in such a manner that, from the position of the second view data with respect to the object shown in the first view data, it is clear which part of the object is shown in the second view data, which is advantageous since it allows a user to see at a single glance which part of the object is shown in the second view.

Description

DISPLAY SYSTEM AND METHOD FOR DISPLAYING A THREE
DIMENSIONAL MODEL OF AN OBJECT
The invention relates to a display system for displaying a three dimensional model of an object. The invention further relates to a method for displaying a three dimensional model of such an object.
In cases where it is undesirable or impossible to examine or manipulate a physical object, it can be advantageous to instead examine and manipulate a rendered view of a three dimensional model of the object. This is for example the case with historical artifacts which are too precious to be handled by members of the public, or with scanned medical data which is not available for inspection other than as a three dimensional model of an object. In known display systems for displaying a three dimensional model of an object, the input parameters of the rendering unit, determining for example from what angle and from which distance the object is viewed, are input by manipulating a known input device such as for example a mouse, a joystick, or a gamepad. For example, moving a joystick to the left may rotate the viewed object along a vertical axis in one direction, moving a joystick to the right may rotate the viewed object along the same axis in the opposite direction, moving the joystick forward may bring the viewed object closer, and moving it back may move the viewed object further away. With "viewed object" here is meant the (virtual) object as represented in the rendered view of the three dimensional data on the display.
A disadvantage of the known display systems is that manipulating the user interface is often a cumbersome process. In addition, in known display systems a more detailed look at the object is obtained by moving the rendered object closer, which means that parts of the object view may fall outside of the display area, which can easily lead to the user losing the feeling that she is interacting with a real object. This is even worse if the user intended to examine a part of the object that has moved out of view while zooming in on the object. It is an object of the invention to provide a display system for displaying a three dimensional model of an object with an improved way to view a three dimensional model of an object. It is a further object of the invention to provide a method for displaying a three dimensional model of an object with an improved way to view a three dimensional model of such an object.
An object of the invention is achieved by providing a display system for displaying a three dimensional model of an object, the system comprising
- a first input device,
- a second input device,
- an input device interface, arranged to generate a first input signal representing a state of the first input device and a second input signal representing a state of the second input device,
- a processing unit, arranged to receive the first and second input signals from the input device interface and arranged to determine a first model rendering parameter set based on the first input signal and independent of the second input signal and a second model rendering parameter set based on the first and second input signals,
- a rendering unit arranged to receive the first model rendering parameter set and the second model rendering parameter set from the processing unit, to render as first view data a first part of the three dimensional model of the object according to the first model rendering parameter set, to render as second view data a second part of the three dimensional model of the same object according to the second model rendering parameter set, and to combine the first and second view data,
- a display unit, arranged to receive the combined view data from the rendering unit and to display said combined view data,
wherein the second view data is combined with the first view data in an overlapping manner.
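The claimed arrangement can be read as a conventional render loop: read both device states, derive the two parameter sets, render twice, combine, display. The sketch below is illustrative only; every class and method name is a hypothetical placeholder, not taken from the patent.

```python
def display_loop(interface, processor, renderer, display, model):
    """Hypothetical main loop for the claimed display system."""
    while display.is_open():
        # Input device interface: one input signal (state) per device.
        state1 = interface.read_state(device=1)  # first input device
        state2 = interface.read_state(device=2)  # second input device

        # Processing unit: the first parameter set depends on state1 only;
        # the second parameter set depends on both state1 and state2.
        params1 = processor.first_parameter_set(state1)
        params2 = processor.second_parameter_set(state1, state2)

        # Rendering unit: two views of the same model, combined in an
        # overlapping manner (second view superimposed on the first).
        view1 = renderer.render(model, params1)
        view2 = renderer.render(model, params2)
        combined = renderer.combine(view1, view2)

        # Display unit: show the combined view data.
        display.show(combined)
```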
In the context of this document, a three dimensional model of an object is a collection of data that can for example comprise three dimensional vertex coordinates, volumetric textures, and/or surface textures that can be used to provide a two dimensional or three dimensional representation of an object. The object may be of any type, a real object, virtual object, a measurement data set, etc. The concepts "three dimensional data", "three dimensional model data", "three dimensional model", "model data", and "model" may be used interchangeably in this document.
A real object, such as for example a historical artifact, may be captured as three dimensional data using for example a known 3D camera or a 3D scanner. A three dimensional model of an object can also be a collection of measurement data, for example medical data such as from a CT scan, an MRI scan, or a fusion of various types of scans. Three dimensional models can also be created manually, for example by a graphical artist. A three-dimensional model can comprise more information than can be seen in a single representation on a two dimensional or even a three dimensional screen. For example, a three dimensional model of an object may provide texture data representing the outer surface of a closed object, and also provide data on parts contained within the closed object, which are normally not rendered visible. In medical three dimensional models, data originating from more than one type of probe or measuring equipment may be provided, so different sets of data can be rendered.
Medical image fusion data sets form an example of three dimensional models having multiple data sets.
In an embodiment, the three dimensional model comprises temporal data, and can thus be said to have a time dependence. For example, a three dimensional model may comprise model data sampled at different points in time, so that the rendering unit can render view data of the three dimensional model at different points in time. This makes it possible to show the model at different points in time in succession, as a real-time or non-real-time animation. The time dependence of a three dimensional model can also be implemented by applying a mathematical formula or a body of computer programming. In this document, it is implied that the mentioned three dimensional models, and the rendered view data thereof, may have a time dependence.
A three dimensional model of an object can be rendered, which means that view data is created based on the three dimensional data as would be captured or "seen" by a virtual camera placed at a certain position, having a certain vector called "up", a certain viewing direction, lens properties, and a field of view. In an embodiment, the position, viewing direction, up vector or up direction, and general properties of the virtual camera are input parameters, or model rendering parameters, of the rendering unit. Model rendering parameter sets comprise the input needed for a renderer to render a view of a three dimensional model, not including the three dimensional model data itself. In an embodiment, the model rendering parameter set comprises the virtual camera location, camera view direction, camera up direction, and camera field of view. In an embodiment, the three dimensional model data has more than one data set, and the selection of which data set to render is also a model rendering parameter. In an embodiment, the model rendering parameter set comprises parameters related to the lighting of the virtual object represented by the three dimensional model. View data, as output by the rendering unit, may be a two-dimensional image, or it may be a stereo image suitable for a(n) (auto)stereoscopic display. It may also be in another viewing format suitable for a two dimensional, stereoscopic, or three dimensional display system. A display unit may comprise a screen for displaying view data. It may further comprise additional devices, such as for example a display projector for projecting an image on the screen, or glasses for certain types of stereoscopic displays. In the context of this document, the phrase "displaying a three dimensional model" generally indicates rendering view data of a three dimensional model of an object, using input parameters as described above, and displaying the view data on a display unit.
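One possible encoding of such a model rendering parameter set is sketched below; the field names are illustrative assumptions, not terms defined by the patent.

```python
from dataclasses import dataclass, field

@dataclass
class ModelRenderingParameters:
    """Sketch of a model rendering parameter set (names are hypothetical)."""
    camera_position: tuple        # (x, y, z) location of the virtual camera
    view_direction: tuple         # direction the virtual camera looks along
    up_direction: tuple           # the camera "up" vector
    field_of_view_deg: float      # field of view of the virtual camera
    data_set: str = "exterior"    # which data set/modality of the model to render
    lighting: dict = field(default_factory=dict)  # e.g. light positions, intensities
```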
In an embodiment, the display unit comprises a touch screen, arranged to offer a user interface to control aspects of the display system. For example, a user might use such a user interface to select a three dimensional model of an object for viewing.
In a typical personal computer (PC) device, the processing unit can be the central processing unit (CPU), and the rendering unit can be the graphics processing unit (GPU). However, the invention is not limited to such a strict separation of processing unit and rendering unit. Both units may be implemented on the same piece of hardware, such as a semiconductor device. The rendering unit may be implemented
predominantly in hardware, software, or a general mix of hardware and software. The state of the input device comprises measurable quantities such as position and orientation of the device (in particular relative to the input device interface), but in other embodiments the state of the input devices also comprises values derived from these quantities, such as velocity (rate of change of position) or angular velocity (rate of change of orientation about an axis of the object), and even second derivative quantities, such as acceleration. In an embodiment the input device is equipped with buttons, joystick controls, or other input sensors such as a grip sensor, temperature sensor, etc. In this case, the state of the input device also comprises the state of these buttons, joystick controls, and any other sensors.
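Derived quantities such as velocity and acceleration can be computed from successive pose samples; a minimal sketch (class name hypothetical, filtering of the raw samples omitted):

```python
import numpy as np

class TrackedDeviceState:
    """Pose of an input device plus first and second derivatives."""

    def __init__(self):
        self.position = None             # last sampled position (x, y, z)
        self.velocity = np.zeros(3)      # rate of change of position
        self.acceleration = np.zeros(3)  # rate of change of velocity

    def update(self, new_position, dt):
        """Incorporate a new position sample taken dt seconds after the last."""
        new_position = np.asarray(new_position, dtype=float)
        if self.position is not None and dt > 0:
            new_velocity = (new_position - self.position) / dt
            self.acceleration = (new_velocity - self.velocity) / dt
            self.velocity = new_velocity
        self.position = new_position
```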
An advantage of the display system as described above is that it provides two input devices, for example one for each hand of the user, thus providing more degrees of freedom than a single such input device. A further advantage of the system is that it renders two sets of view data based on three dimensional model data of the same object, thus offering two views of the object instead of one. The first view data provides an overview of the object, while the second view data provides a view of a part of the object. By combining both views in an overlapping manner, the user can see they are views of the same object. In an embodiment, said two views are combined in such an overlapping manner that, from the position of the second view data with respect to the object shown in the first view data, it is clear which part of the object is shown in the second view data, which is advantageous since it allows a user to see at a single glance which part of the object is shown in the second view. Combining two views in an overlapping manner does not necessarily mean that the rendering unit has to render the (invisible) part of the one view that is overlapped by the other view.
In an embodiment the first view data is representative of substantially the entire object, and the second view data is representative of said second part of the object.
In an embodiment the second view data is combined with the first view data in such a manner, that the second view data is displayed at the display unit at a second view data display position relative to the first view data display position that essentially corresponds to the position of the second part of the object relative to the position of the first part of the object. In an embodiment, the part of the object represented by the second view data substantially corresponds to the part of the same object that would be represented by the part of the first view data if the second view data did not overlap said part of the first view.
In the mentioned embodiments, the two views are thus combined in such an overlapping manner that the part of the object as shown in the second view data corresponds to the same part of the same object as would be shown in the first view data if the second view data did not overlap it. This advantageously helps the user to determine which part of the object is shown in the second view, by displaying the second view data on the corresponding position in the first view data.
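Placing the second view at the corresponding position amounts to projecting the centre of the magnified part through the first view's virtual camera and centring the overlay at the resulting pixel. A pinhole-projection sketch follows; the conventions and all parameter names are assumptions for illustration.

```python
import numpy as np

def project_to_screen(point, cam_pos, view_dir, up, fov_deg, width, height):
    """Project a 3D point to pixel coordinates of the first view.

    Returns where the (overlapping) second view should be centred so that
    it covers the part of the first view it replaces. Assumes the point
    lies in front of the camera (depth z > 0).
    """
    view_dir = np.asarray(view_dir, float)
    view_dir = view_dir / np.linalg.norm(view_dir)
    right = np.cross(view_dir, up)
    right = right / np.linalg.norm(right)
    true_up = np.cross(right, view_dir)

    rel = np.asarray(point, float) - np.asarray(cam_pos, float)
    x, y, z = rel @ right, rel @ true_up, rel @ view_dir  # camera coordinates
    f = (height / 2) / np.tan(np.radians(fov_deg) / 2)    # focal length, pixels
    return (width / 2 + f * x / z, height / 2 - f * y / z)
```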
In an embodiment, the processing unit is arranged to determine the first model rendering parameter set independent of the second input signal. This advantageously allows a user to manipulate the first view data, generally representing an overview of the three dimensional model of an object, using a single input device.
In an embodiment, the processing unit is arranged to determine the second model rendering parameter set based on the second input signal relative to the first input signal. In an embodiment, the processing unit is arranged to determine the second model rendering parameter set based on a signal representing the relative position of the second input device with respect to the first input device. In an embodiment, the first input device and/or the second input device is formed as a passive input device that is designed to be used while holding the input device in one hand. This is advantageous, since passive input devices require no power supply and no cable towards a central part, making them easier to handle. Designing an input device to be held in one hand means that both input devices can be easily handled simultaneously by a user, one in each hand. Input devices designed to be held in one hand can be said to be "graspable".

In an embodiment according to the invention, the second view data shows a part of the object at a higher level of detail or a higher magnification factor than the first view data. An advantage is that the display system shows two views of the object at the same time: one overall view, in the form of the first view data, at a standard level of detail, and one view, in the form of the second view data, at a higher level of detail. This might also be described as showing the object with a higher magnification factor. This is advantageous, since the user can simultaneously see (most of) the entire object and a detailed or magnified part thereof, for example to study details. In an embodiment, the amount of additional detail shown depends on the distance between the first and the second input device, as sketched below. This allows easy manipulation of the amount of detail shown in the second view data.
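The mapping from device distance to magnification is not prescribed above; the following sketch shows one plausible monotonic mapping, with all parameter values chosen purely for illustration:

```python
def magnification_from_distance(distance_m, near=0.05, far=0.60,
                                min_zoom=1.0, max_zoom=8.0):
    """Map the distance between the two input devices to a zoom factor.

    Moving the second device closer to the first increases magnification;
    clamping keeps the zoom within a usable range.
    """
    clamped = min(max(distance_m, near), far)
    t = (far - clamped) / (far - near)        # 1.0 when near .. 0.0 when far
    return min_zoom + t * (max_zoom - min_zoom)

print(magnification_from_distance(0.10))      # devices close together -> ~7.4x
print(magnification_from_distance(0.50))      # devices far apart      -> ~2.3x
```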
In an embodiment according to the invention, the second view data is rendered using three dimensional model data having a higher level of detail than the three dimensional model data used for rendering the first view data.
Since rendering units generally have a limited amount of resources for rendering three dimensional data, it is advantageous to render a highly detailed three dimensional model only when necessary, and a somewhat less detailed three dimensional model otherwise. In the display system, it can thus be advantageous to render the second view data using a more detailed three dimensional model than the first view data. Overall, rendering unit resources are spared, resulting in faster performance of the display system.
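A minimal sketch of such resource-aware model selection; the two-level store and the lazy loading are assumptions made for illustration only:

```python
class ModelStore:
    """Serve a coarse mesh for the overview and lazily load a fine mesh."""

    def __init__(self, coarse_mesh, fine_mesh_loader):
        self.coarse = coarse_mesh
        self._fine_loader = fine_mesh_loader   # callable, loads on demand
        self._fine = None

    def mesh_for_view(self, detail_view):
        if not detail_view:
            return self.coarse                 # first view: cheap overview mesh
        if self._fine is None:
            self._fine = self._fine_loader()   # load detailed mesh only once
        return self._fine
```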
In an embodiment according to the invention, the first and second view data are rendered using different data sets of the three dimensional model.
When the three dimensional model comprises multiple data sets, for example one data set modeling the exterior of an object and one data set modeling parts within the object, it is advantageous to allow the user to view both data sets intuitively. This is most advantageously achieved by selecting the more recognizable data set, for example the one representing the object's exterior, for rendering the first view data, and another data set, for example one representing a measurement of the interior of the object, for rendering the second view data.
In an embodiment according to the invention, the second view data is provided with a displayed symbol indicative of the difference between the second model rendering parameter set and the first model rendering parameter set. Such a symbol might represent a magnifying glass if the second view data shows a more detailed or enlarged part of the object compared to the first view data. This advantageously gives the user a visual cue of the function of the second input device and the character of the second view data.
In an embodiment according to the invention, the symbol is a representation of the second input device. In an embodiment, a representation of the second input device is displayed on the display, and the second view data is displayed substantially inside the area occupied and encompassed by the representation of the second input device, and the first view data is displayed substantially outside said area. This advantageously reinforces the appearance of a functional link between the second input device and the second view data. In an embodiment, the input device interface comprises an object tracking unit for tracking the position and/or orientation of the first and/or the second input device. This advantageously allows an implementation using passive input devices.
In an embodiment, the object tracking unit comprises an optical sensor and the first and/or second input device comprise optical reflecting markers. Measuring the reflection of light on a marker using an optical sensor allows the object tracking unit to determine the position of the markers, which, combined with knowledge of the position of the markers on the input devices, makes it possible to work out the relative position and/or orientation of the input devices.
In an embodiment according to the invention, the three dimensional model of an object represents a physical object. For example, this object can be a historical artifact that is too precious or vulnerable to be handled by members of the public. The display system advantageously allows members of the public to manipulate and analyze a three dimensional representation of the object as if it were the physical object.
In an embodiment the three dimensional model of an object is based on medical imaging data. In an embodiment the three dimensional model of an object is based on seismic data. A display system according to the invention can advantageously be used to examine and analyze data sets obtained in, for example, medical imaging or seismic measurements.
In an embodiment according to the invention, the state represented by the first input signal is the orientation and position of the first input device, and the state represented by the second input signal is the orientation and position of the second input device.
Advantageously, measuring signals representing the relative position and orientation of the input devices allows these signals to be used as a basis for determining the rendering parameters of the three dimensional model. For example, by transferring the relative translation and rotation applied by the user to the input devices onto the rendered three dimensional model data, intuitive manipulation of the three dimensional model representation is achieved.
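A sketch of this transfer using homogeneous 4x4 transforms (the function and argument names are illustrative, not part of the disclosure):

```python
import numpy as np

def update_model_transform(model_T, device_T_prev, device_T_now):
    """Apply the device's pose change since the last frame to the model.

    All arguments are 4x4 homogeneous transforms in tracker coordinates.
    The relative motion delta = now @ inv(prev) is composed onto the model,
    so rotating or translating the device rotates or translates the object.
    """
    delta = device_T_now @ np.linalg.inv(device_T_prev)
    return delta @ model_T
```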
In an embodiment, an input device is formed by a hand of the user. This can for example be achieved using an object tracking unit comprising a camera and a tracking processing unit arranged to determine a hand position and/or orientation based on optical data from the camera. Advantageously, this removes the need for separate input devices.
In an embodiment, the display system is arranged to set a current position and/or orientation of the first input device as the initial position and/or orientation of the first input device, corresponding to an initial first model rendering parameter set for rendering an initial, default, first view data of the three dimensional model of an object. An advantage is that, regardless of the exact absolute position and/or orientation of the first input device held by the user, the system can calibrate itself to use that position and/or orientation as the initial one, corresponding to a rendering of the three dimensional object in a natural orientation and at a natural display scale. The user is thus not required to place himself at a specific position in front of the display device in order to see a recognizable initial rendering of the three dimensional object.
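One way to sketch this self-calibration is to record the device pose at pick-up as a reference and express every later pose relative to it (names and the matrix representation are assumptions for illustration):

```python
import numpy as np

class PoseCalibrator:
    """Treat the pose at pick-up as identity so the object starts upright."""

    def __init__(self):
        self.reference_inv = None

    def on_pickup(self, device_pose):       # 4x4 pose when the user grabs it
        self.reference_inv = np.linalg.inv(device_pose)

    def relative_pose(self, device_pose):
        if self.reference_inv is None:      # not calibrated yet: default view
            return np.eye(4)
        return device_pose @ self.reference_inv
```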
Mapping the relative orientation and position of an input device to the orientation and position of a displayed virtual object, as described above, provides an intuitive way to manipulate the position and orientation of that virtual object. In an embodiment, the first input device has a substantially dodecahedron shape. The dodecahedron shape has many sides on which to place markers, making optical detection of the device's position and orientation easier.
In an embodiment, the first input device is a replica of the object represented by the three dimensional model of an object. Linking the form of the first input device to the object shown on the display makes it easier for the user to understand that manipulating the first input device will have an effect on the displayed object.
In an embodiment, the second input device is shaped like a magnifying glass. Linking the form of the second input device to its function makes its function easier to understand.
In an embodiment the display system comprises a plurality of first input devices, and the system is arranged to detect which first input device is selected by a user.
In an embodiment, the rendering unit has access to a plurality of three dimensional models, and is arranged to select a three dimensional model of an object for rendering based on the user selection of a first input device. This lets a user pick up one of a plurality of first input devices, automatically causing the corresponding three dimensional model of an object to be shown. This removes the need for manual selection of a three dimensional model. In an embodiment the display system comprises a plurality of second input devices, and the system is arranged to detect which second input device is picked up by a user.
In an embodiment the rendering unit has access to a plurality of second model rendering parameter sets, and is arranged to select a second model rendering parameter set based on the user selection of a second input device. This allows a user to select a second view rendering mode by selecting one of a plurality of second input devices. For example, the user may pick up a second input device resembling a magnifying glass to enable the second model rendering parameter set that gives more detailed and enlarged second view data.
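A sketch of this selection logic; the marker-pattern identifiers, file names, and mode parameters below are invented for illustration:

```python
# Hypothetical registries keyed by the marker-pattern ID the tracker reports.
MODELS_BY_DEVICE = {
    "vase_replica": "models/vase_highres.obj",
    "dodecahedron": "models/statue.obj",
}
RENDER_MODE_BY_DEVICE = {
    "magnifier_glass": {"mode": "magnify", "zoom": 4.0},
    "mri_symbol": {"mode": "cross_section", "dataset": "mri"},
}

def on_device_picked_up(device_id):
    """Dispatch a pick-up event to a model load or a render-mode switch."""
    if device_id in MODELS_BY_DEVICE:
        return ("load_model", MODELS_BY_DEVICE[device_id])
    if device_id in RENDER_MODE_BY_DEVICE:
        return ("set_second_view_mode", RENDER_MODE_BY_DEVICE[device_id])
    return ("ignore", None)
```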
In an embodiment the display unit comprises a display device capable of conveying a depth impression, or a three dimensional display device. This can advantageously reinforce the illusion that the user is manipulating a real object. In an embodiment the display system comprises at least one additional display unit, wherein said additional display unit is arranged to display the same three dimensional model of an object as the display unit. Having a second display allows multiple people to examine the object as it is being manipulated and analyzed by a user. In an embodiment the display system comprises the object which is represented by the three dimensional model of an object. Placing the actual physical object near the display unit and input devices allows a user to first investigate the actual object for interesting areas, followed by a more thorough analysis of those areas using the three dimensional model of the object.
The invention provides a method for displaying a three dimensional model of an object, comprising the steps of
- generating a first input signal representing a state of a first input device,
- generating a second input signal representing a state of a second input device,
- determining a first model rendering parameter set based on the first input signal,
- determining a second model rendering parameter set, based on the first input signal and on the second input signal,
- rendering first view data of the three dimensional model of an object according to the first model rendering parameter set,
- rendering second view data of the three dimensional model of the same object according to the second model rendering parameter set,
- combining the first and second view data in an overlapping manner into combined view data,
- displaying the combined view data.
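Read as a per-frame loop, these steps might look like the following sketch, in which every object and method name is an assumed placeholder rather than an interface defined by the method itself:

```python
def display_frame(interface, processor, renderer, display):
    sig1 = interface.read_first_input()            # state of first device
    sig2 = interface.read_second_input()           # state of second device
    params1 = processor.first_params(sig1)         # independent of sig2
    params2 = processor.second_params(sig1, sig2)  # relative to first device
    view1 = renderer.render(params1)               # overview of the object
    view2 = renderer.render(params2)               # detail / modified view
    combined = renderer.overlay(view1, view2)      # second view on top
    display.show(combined)
```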
This method allows a user to more easily manipulate and analyze a three dimensional model of an object.

BRIEF DESCRIPTION OF THE FIGURES
Figure 1 schematically shows a display system according to the invention.
Figure 2A schematically shows a user interacting with a display system according to the invention.
Figure 2B schematically shows a displayed image on a display system according to the invention.
Figure 2C schematically shows a further displayed image on a display system according to the invention.
Figure 3 schematically shows a first input device.
Figure 4 schematically shows a second input device.
Figure 5 schematically shows a first input device that is shaped as a replica of the modeled object.

DETAILED DESCRIPTION
Figure 1 schematically shows an embodiment of a display system 50 according to the invention. The shown display system 50 comprises a first input device 51, a second input device 52, and an input device interface 53. In an embodiment, the input devices 51 and 52 are wireless, graspable devices. Wireless means that they are not physically connected to the input device interface 53; graspable means that they are designed to be held in one hand. The input device interface 53 generates first and second input signals corresponding to the state of the first and second input devices 51 and 52 respectively.
The input device interface 53 comprises an object tracking unit (not shown). The object tracking unit may comprise a scanning infrared source for emitting infrared light, an infrared sensor for detecting reflections of the emitted infrared light, and a tracking processing unit arranged to control the infrared source and to read and interpret the signals from the infrared sensor. The input devices may comprise infrared reflecting markers or retro-reflective markers, with particular surfaces of the input device being provided with particular marker patterns. A skilled person will know how to arrange the aforementioned components into a tracking unit that can be used to generate signals corresponding to the relative position and orientation of one, two, three, or more input devices. It is preferred to use infrared light, since it is not visible to the human eye. However, according to the invention, instead of infrared light another type of light or another type of electromagnetic radiation may be used. An advantage of the infrared based tracking unit as described here is that the input devices can be passive, i.e. not requiring a battery or other source of power, which simplifies the use and maintenance of the input devices. Alternative implementations may comprise active markers or inertial sensors, which are within reach of a skilled person. Inertial sensors measure a quantity representing change of movement, which can be converted into a signal representing position and/or orientation.
A skilled person will know that other implementations comprising one or more input devices and an input device interface which generates signals corresponding to, for example, the positions and orientations of the input devices can be provided. Such alternative systems are also within reach of a skilled person, and may for example involve Bluetooth, IrDA, acoustic signals, or other wireless or wired interfacing standards. Such a system may also involve a camera coupled to a computer vision device for measuring, for example, a position and orientation of an input device observed by the camera, which advantageously can remove the need to place markers on the input device.
The input device interface 53 makes the first and second input signals representing the state of the first and second input devices 51 and 52 respectively available to the processing unit 54. The processing unit 54 is arranged to determine rendering parameters for the rendering unit 55 based on the received input signals. In an embodiment, the processing unit 54 accepts two input signals, corresponding to the state of two input devices, and makes two model rendering parameter sets available to the rendering unit 55. However, in other embodiments a different number of input signals may be received, and/or a different number of model rendering parameter sets may be generated. The processing unit can be a general purpose processing unit, such as a PC compatible central processing unit (CPU), running a software program that is stored on an internal or external memory (not shown).
The rendering unit 55 is connected to a memory unit (not shown). In the memory unit, three dimensional data can be stored, so that the rendering unit 55 can read it. The rendering unit 55 is arranged to receive two model rendering parameter sets from the processing unit 54. The rendering unit 55 is arranged to render view data of a three dimensional model of an object based on the render parameters as received from the processing unit 54. The rendering unit generates view data, comprising the one or more rendered views, which it makes available to a display unit. In an alternative
embodiment, the three dimensional data is received from an external storage device, for example a storage device connected to a computer network. Such a storage device may be part of a Picture Archiving and Communication System (PACS). In a further embodiment, not all three dimensional model data is read before rendering begins; this technique is also known as procedural rendering. The display unit 56 is arranged to receive view data from the rendering unit, and to make said view data visible on a user viewable part, such as a display screen. In an embodiment, the display unit 56 is an LCD monitor, and the view data conforms to an industry standard video signal, such as for example an HDTV 1920x1080p HDMI signal. In another embodiment, the display unit 56 comprises a display projector.
In an embodiment, the display unit is an LCD monitor comprising a touch screen, arranged to offer a user interface to control aspects of the display system. For example, a user might use such a user interface to select a three dimensional model of an object for viewing.
In a further embodiment, the display unit 56 is a display capable of conveying a depth impression, for example a stereoscopic display (with, for example, shutter glasses or polarized glasses), an autostereoscopic display (for example based on barrier or lenticular technology), or another three dimensional display, and the view data is in a format suitable for the chosen display, for example stereo format or image-plus-depth format. The display unit may comprise a general three dimensional display device, such as a holographic display.

The rendering unit is used to render first view data from the three dimensional model data, in which substantially the entire object represented by the three dimensional model is visible. According to the first model rendering parameter set, the virtual camera is placed so that the viewing angle and the viewing distance are representative of the relative orientation and the relative position of the first input device. That is, if the user holding the first input device rotates the first input device counter clockwise, the rendered object in the view will also rotate counter clockwise. The view is refreshed at a frequency that is suitable for video material, such as for example 60 times per second (60 Hz). In general, a rotation of the first input device around any axis of the device will result in a rotation of the rendered object around a corresponding axis. Similarly, a translation of the first input device in any direction will result in a translation of the rendered object in a similar direction. This can for example be implemented by adjusting the position of the virtual camera and the camera up vector in the rendering process accordingly, as sketched below. It is advantageous if the system is arranged to automatically detect and set a default orientation and position of the first input device which results in view data representing the entire object in what can be called a standard upright position, in other words, placing the virtual camera in a sensible starting position. For example, the system might detect the event of a user picking up the input device, and set the subsequent position and orientation of the input device as corresponding to this starting position. The advantage of this automatic detection is that the user is not required to move the input device to a pre-determined position and into a pre-determined orientation in order to see the object in the manner described, giving the user freedom to move around.
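A sketch of the camera update described above; only the requirement that device rotations map to equal object rotations comes from the description, the helper itself is an assumption:

```python
import numpy as np

def camera_from_device(device_R, device_t, base_distance=2.0):
    """Place the virtual camera so the rendered object mirrors the device pose.

    device_R : 3x3 rotation matrix of the first input device (from tracker)
    device_t : 3-vector device position
    Returns a camera position and up vector for the renderer.
    """
    # Rotating the device by R rotates the object by R, which is equivalent
    # to rotating the camera around the object by the inverse (transpose) of R.
    cam_pos = device_t + device_R.T @ np.array([0.0, 0.0, base_distance])
    cam_up = device_R.T @ np.array([0.0, 1.0, 0.0])
    return cam_pos, cam_up
```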
The rendering unit is used to render second view data from the three dimensional model data. This second view can be a modified view of a part of the object, where the selected part and the details of the modification depend on the position and orientation of the first input device and the position and orientation of the second input device. The second model rendering parameter set, which also describes the modification of the second view, is at least partially based on the position and orientation of the second input device relative to the position and orientation of the first input device. In an embodiment, the modification is a magnification or enlargement of a part of the object, optionally showing extra details not visible in the first view data. In a further embodiment, the first input device can be said to represent the three dimensional object and the second input device can be said to represent a magnifying glass. A corresponding virtual magnifying glass is used in the rendering calculations of the rendering unit, resulting in a second view of the three dimensional model data which corresponds to what a user looking through a real magnifying glass at the real object would see, if a real magnifying glass were positioned and oriented with respect to the real object as the second input device is positioned and oriented with respect to the first input device.
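A simplified sketch of deriving the second view from the relative pose; it uses only the lens position in the object's frame and does not model real lens optics:

```python
import numpy as np

def second_view_params(obj_pose, lens_pose, base_zoom=3.0):
    """Derive detail-view parameters from the lens pose relative to the object.

    obj_pose, lens_pose : 4x4 homogeneous poses from the tracker.
    The lens position expressed in the object's frame selects the part of the
    object to magnify; the device separation scales the zoom factor.
    """
    rel = np.linalg.inv(obj_pose) @ lens_pose    # lens in object coordinates
    focus_point = rel[:3, 3]                     # part of the object to show
    distance = np.linalg.norm(focus_point)
    zoom = base_zoom / max(distance, 0.1)        # closer lens -> stronger zoom
    return {"focus": focus_point, "zoom": zoom}
```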
The first view data and the second view data are combined to form view data for displaying. This view data may be formed by taking the first view as a basis and superimposing the second view, with the modified or magnified view of the data, on top of a part of it. It is advantageous to let the second view which is superimposed on the first view have a circular circumference, resembling the shape of a magnifying glass. In an embodiment, the frame of a magnifying glass may be rendered as well. In a further embodiment, a transition area is created between the first view and the second view, so that the first blends seamlessly into the second, creating a pleasing visual effect.
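A compositing sketch with NumPy; the circular lens area and feathered rim implement the superposition and transition described above, with array shapes and the feather width as illustrative choices:

```python
import numpy as np

def composite_lens(first_view, second_view, centre, radius, feather=12.0):
    """Superimpose second_view on first_view inside a soft-edged circle.

    first_view, second_view : HxWx3 float images of equal size.
    centre, radius          : lens circle in pixels.
    """
    h, w = first_view.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.hypot(xx - centre[0], yy - centre[1])
    # alpha is 1 inside the lens, 0 outside, with a feathered transition rim
    alpha = np.clip((radius - dist) / feather, 0.0, 1.0)[..., None]
    return alpha * second_view + (1.0 - alpha) * first_view
```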
Figure 2A schematically shows a user 10 interacting with a display system 11 according to the invention. The user is holding a first input device 12 in his left hand, and a second input device 13 in his right hand. The display system 11 comprises several visible parts: a housing 15, a display 14, and an object tracking unit 16 for tracking the first and second input devices 12, 13. The non-visible parts of the display system, which have been discussed in relation to figure 1, are in this embodiment located inside the housing 15. In another embodiment, the display unit is external to the housing 15.
Lines of sight a and b in figure 2A illustrate a possible way of interacting with the display system. Along line of sight a, the user looks through the second input device 13 (here represented by an object shaped like a magnifying glass) at the first input device 12 (here represented by a multi-sided object). Along line of sight b, the user looks at the display without input devices 12, 13 being in the line of sight. By switching between lines of sight a and b, the user can manipulate the input devices and magnify, or otherwise show modified views of, interesting parts of the object. Use of the display system according to the invention is not limited to looking along the two lines of sight a and b described above. In fact, a more experienced user will most likely look mostly along line of sight b, while his hands manipulate input devices 12, 13 unseen. Irrespective of whether line of sight a is used, in an embodiment the second view data as shown on the display is determined by the relative position and orientation of the second input device 13 with respect to the first input device 12.

In an embodiment, the housing 15 of the display system 11 also offers a display case for one or more physical objects for which the display system has three dimensional models. Said display case may also be placed elsewhere in the vicinity of the display system 11. This has several advantageous effects. In a museum context, for example, a visitor can scrutinize the original object, looking for details of interest, and then use the display system to examine close-ups of these details. Another advantage is that the possibility for the visitor to compare the three dimensional model of an object on the display system with the actual object in a display case nearby can give the visitor more confidence that the three dimensional model is an accurate representation of the original object. As a further example, during a guided tour a museum guide can give verbal information while using the display system to highlight relevant parts of the object to a group of people that is otherwise too large for all members to see the original object during the explanation.

Figure 2B shows an example of view data as displayed on the display unit 14. The first part of the view data shows the rendered first view data 17, representing the object in the position and orientation representative of the position and orientation of the first input device. In the round area inside the lens of the displayed magnifying glass 19, second view data 18 is displayed, surrounded by the first view data 17. The second view data 18 represents a view of the object rendered with a larger magnification factor than the first view data.
The view data of figure 2B can be created by superimposing second view data 18 on top of first view data 17, which fills substantially the entire display area, whereby the second view data 18 surrounded by the displayed magnifying glass 19 is placed at a relative position on the display such that the illusion is created that the second view data 18 is seen through a magnifying glass approximately represented by the displayed magnifying glass 19. A skilled person will know that there are various ways in which a rendering unit can realize a magnifying glass effect as shown in figure 2B. One example is simply rendering two views and combining them, possibly superimposing one on the other. An alternative is to actually add a three dimensional model of a magnifying glass, including the lens, to the three dimensional model data and to render the view in one pass using a suitable technique such as ray tracing. According to the invention, the notion of a first and a second view, where the second view shows a modified version of the first view, also applies to said ray tracing technique, even if it appears that both views are rendered simultaneously as a single view. Besides superimposing views and ray tracing, other techniques to obtain the effect will be known to a skilled person. The invention is not limited to any of the techniques described in this document.
Figure 2C shows an alternative display of view data on a display unit 14. In this case, the first view 17 shows the object as described in relation to figure 2B, but the second view 20, surrounded by a displayed circle to clearly separate the first and second view data, shows a cross section of the object. For this view, three dimensional model data from (3D) ultrasound, X-ray, CT scan, or magnetic resonance imaging (MRI) may be used, as is commonly done for certain types of historical artifacts. This is also an example of having multiple modalities in the three dimensional model data. The outside appearance of the object, including its outer shape and surface texture, is shown in the first view 17 (modality one), whereas the, for example, MRI data is used in the second view (modality two). According to the invention, it is also possible to combine the display of an alternative modality with a magnification, thus combining the effects shown in connection with figures 2B and 2C.
Figure 3 shows a first input device 12 according to an embodiment of the invention. The first input device 12 has multiple sides, each of which is provided with markers 21 in a certain pattern. The first input device 12 shown here is shaped like a dodecahedron, but the invention is not limited to similarly shaped first input devices. Essentially any object to which markers can be attached can be used as a first input device.
According to an embodiment, the object tracking unit of the display system is arranged to detect the markers on the first input device which are visible to the tracking unit's sensor. As described before, the implementation may comprise an infrared light source, infrared reflecting markers, and an infrared sensor. The tracking processing unit has access to stored data regarding the locations of the markers on the first input device. The tracking processing unit is arranged to match this knowledge against the pattern of markers currently being detected, and from that to work out the current position and orientation of the first input device. This is a known technique for determining an orientation and position of an object, and a skilled person will be able to implement it. A skilled person will also know how to make this system robust, for example by handling the event of a marker being partially or fully covered by the hand of the user holding the first input device.
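For completeness, a sketch of the rigid-pose recovery from matched markers using the standard Kabsch/SVD method; it assumes the marker correspondences have already been established:

```python
import numpy as np

def pose_from_markers(model_pts, detected_pts):
    """Recover rotation R and translation t with detected ~= R @ model + t.

    model_pts, detected_pts : Nx3 arrays of corresponding marker positions
    (stored device layout vs. current tracker measurement), N >= 3.
    """
    mc, dc = model_pts.mean(axis=0), detected_pts.mean(axis=0)
    H = (model_pts - mc).T @ (detected_pts - dc)   # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dc - R @ mc
    return R, t
```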
The invention is not limited to the use of markers as discussed in relation to figure 3 and before. For example, an alternative is a computer vision approach where a visible light camera follows the input device, and a processing unit determines the position and orientation of the input device from the tracked shape.
Figure 4 shows a second input device 13 according to an embodiment of the invention. Like the first input device discussed in relation to figure 3, the second input device 13 is provided with a number of markers in a specific pattern. The orientation and position of the second input device 13 are determined in the same way as for the first input device 12. Advantageously, the second input device has a shape that resembles a magnifying glass. This makes it distinct from the first input device 12, and also acts as a visual cue for the user regarding the function of the second input device.
In an embodiment according to the invention, the display system provides a number of different second input devices. One might be shaped like a magnifying glass 13 as in figure 4, whereas another can be shaped like, for example, a triangular MRI warning symbol. Depending on which second input device is picked up, the system will use a different second view rendering mode. For example, if the user picks up the magnifying glass, the magnification mode as discussed in relation to figure 2B is used, whereas if the user picks up the MRI symbol, the cross-section mode as discussed in relation to figure 2C is used. Detection of which second input device is picked up by the user might be achieved by applying a different pattern of markers to each of the second input devices.
Figure 5 shows an alternative form for a first input device according to the invention, where the first input device is not shaped like the multi-sided object of figure 3, but is actually shaped to resemble the original object of which a three dimensional model is displayed. This has the advantage that it is more intuitive, and thus easier, for the user to manipulate an input device whose rotations and translations are applied to the viewed object representation, when that input device resembles the viewed object representation.
In an embodiment, a single display system may have access to three dimensional model data for a number of original objects. The display system may also be provided with a number of first input devices, each first input device being shaped like one of the original objects. When a user picks up one of the first input devices, for example removing it from a tray on the display system, the display system will register this event and load the three dimensional model data corresponding to the object which said first input device resembles into the rendering unit. Detection of which first input device was picked up, and therefore which model to load, might be accomplished, for example, by applying a different marker pattern to each of the first input devices.
In an embodiment, the display system may be provided with at least one additional display unit. Such an additional display unit can be used in various ways. For example in "slave mode", the additional display unit may receive the same view data as the (main) display unit. An advantage of such a setup is that it can be used to give presentations, where the viewers of the presentation watch the images on an additional display unit, and the (main) display unit is used by the presenter. In "multi mode", the additional display unit may receive additional view data, based on the state of additional first and second input devices, but representing the same three dimensional model of an object as the model visible on the main display unit.
While preferred embodiments of this invention have been shown and described, modifications thereof can be made by one skilled in the art without departing from the spirit or teaching of this invention. The embodiments described herein are exemplary only and are not limiting. Many variations and modifications of the system and apparatus are possible and are within the scope of the invention. Accordingly, the scope of protection is not limited to the embodiments described herein, but is only limited by the claims which follow, the scope of which shall include all equivalents of the matter of the claims.

Claims

1. Display system for displaying a three dimensional model of an object, the system comprising
- a first input device,
- a second input device,
- an input device interface, arranged to generate a first input signal representing a state of the first input device and a second input signal representing a state of the second input device,
- a processing unit, arranged to receive the first and second input signals from the input device interface and arranged to determine a first model rendering parameter set based on the first input signal and independent of the second input signal and a second model rendering parameter set based on the first and second input signals,
- a rendering unit arranged to receive the first model rendering parameter set and the second model rendering parameter set from the processing unit, to render as first view data a first part of the three dimensional model of the object according to the first model rendering parameter set, to render as second view data a second part of the three dimensional model of the same object according to the second model rendering parameter set, and to combine the first and second view data,
- a display unit, arranged to receive the combined view data from the rendering unit and to display said combined view data,
wherein the second view data is combined with the first view data in an overlapping manner.
2. Display system as claimed in claim 1, wherein the first view data is
representative of substantially the entire object, and the second view data is
representative of said second part of the object.
3. Display system as claimed in claim 1 or 2, wherein the second view data is combined with the first view data in such a manner that the second view data is displayed at the display unit at a second view data display position relative to the first view data display position that essentially corresponds to the position of the second part of the object relative to the position of the first part of the object.
4. Display system according to any of the preceding claims, wherein the part of the object represented by the second view data substantially corresponds to the part of the same object that would be represented by the part of the first view data if the second view data did not overlap said part of the first view.
5. Display system according to any of the preceding claims, wherein the first input device and/or the second input device is formed as a passive device that is designed to be used while holding the device in one hand.
6. Display system according to any of the preceding claims, wherein the second view data shows a part of the object at a higher level of detail than the first view data.
7. Display system according to any of the preceding claims, wherein the second view data is rendered using three dimensional model data having a higher level of detail than the three dimensional model data used for rendering the first view data.
8. Display system according to any of the preceding claims, wherein the first and second view data are rendered using different data sets of the three dimensional model.
9. Display system according to any of the preceding claims, wherein the second view data is provided with a displayed symbol indicative of the difference between the second model rendering parameter set and the first model rendering parameter set.
10. Display system according to any of the preceding claims, wherein a
representation of the second input device is displayed on the display, and wherein the second view data is displayed substantially inside the area occupied and encompassed by the representation of the second input device, and the first view data is displayed substantially outside said area.
11. Display system according to any of the preceding claims, wherein the input device interface comprises an object tracking unit for tracking the position and/or orientation of the first and/or the second input device.
12. Display system according to claim 11, wherein the object tracking unit comprises an optical sensor and the first and/or second input device comprise optical reflecting markers.
13. Display system according to any of the preceding claims, wherein the three dimensional model of an object represents a physical object.
14. Display system according to any of the preceding claims, wherein the three dimensional model of an object is based on medical imaging data.
15. Display system according to any of the preceding claims, wherein the three dimensional model of an object is based on seismic data.
16. Display system according to any of the preceding claims, wherein the state represented by the first input signal is the orientation and position of the first input device and the state represented by the second input signal is the orientation and position of the second input device.
17. Display system according to any of the preceding claims, wherein the first input device has a substantially dodecahedron shape.
18. Display system according to any of the preceding claims, wherein the first input device is a replica of the object represented by the three dimensional model of an object.
19. Display system according to any of the preceding claims, wherein the second input device is shaped like a magnifying glass.
20. Display system according to any of the preceding claims, comprising a plurality of first input devices, and wherein the system is arranged to detect which first input device is selected by a user.
21. Display system according to claim 20, wherein the rendering unit has access to a plurality of three dimensional models, and is arranged to select a three dimensional model of an object for rendering based on the user selection of a first input device.
22. Display system according to any of the preceding claims, comprising a plurality of second input devices, and wherein the system is arranged to detect which second input device is picked up by a user.
23. Display system according to claim 22, wherein the rendering unit has access to a plurality of second model rendering parameter sets, and is arranged to select a second model rendering parameter set based on the user selection of a second input device.
24. Display system according to any of the preceding claims, wherein the display unit comprises a display device capable of conveying a depth impression or a three dimensional display device.
25. Display system according to any of the preceding claims, comprising at least one additional display unit, wherein said additional display unit is arranged to display the same three dimensional model of an object as the display unit.
26. Display system according to any of the preceding claims, comprising the object which is represented by the three dimensional model of an object.
27. Method for displaying a three dimensional model of an object, comprising the steps of
- generating a first input signal representing a state of a first input device,
- generating a second input signal representing a state of a second input device,
- determining a first model rendering parameter set based on the first input signal,
- determining a second model rendering parameter set, based on the first input signal and on the second input signal,
- rendering first view data of the three dimensional model of an object according to the first model rendering parameter set,
- rendering second view data of the three dimensional model of the same object according to the second model rendering parameter set,
- combining the first and second view data in an overlapping manner into combined view data,
- displaying the combined view data.
PCT/NL2009/050607 2009-10-08 2009-10-08 Display system and method for displaying a three dimensional model of an object WO2011043645A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/NL2009/050607 WO2011043645A1 (en) 2009-10-08 2009-10-08 Display system and method for displaying a three dimensional model of an object

Publications (1)

Publication Number Publication Date
WO2011043645A1 true WO2011043645A1 (en) 2011-04-14

Family

ID=41572416

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/NL2009/050607 WO2011043645A1 (en) 2009-10-08 2009-10-08 Display system and method for displaying a three dimensional model of an object

Country Status (1)

Country Link
WO (1) WO2011043645A1 (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040046736A1 (en) * 1997-08-22 2004-03-11 Pryor Timothy R. Novel man machine interfaces and applications
US20060008119A1 (en) * 2004-06-01 2006-01-12 Energid Technologies Visual object recognition and tracking

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
CUTLER L D ET AL: "Two-handed direct manipulation on the responsive workbench", Proceedings of the 1997 Symposium on Interactive 3D Graphics, Providence, RI, USA, 27-30 April 1997, ACM, New York, NY, USA, pages 107-114, XP000725362, ISBN: 978-0-89791-884-8 *
HINCKLEY K ET AL: "Two-handed virtual manipulation", ACM Transactions on Computer-Human Interaction, vol. 5, no. 3, 1 September 1998, ACM, New York, NY, USA, DOI: 10.1145/292834.292849, pages 260-302, XP002226128, ISSN: 1073-0516 *
JI-SUN KIM ET AL: "A Tangible User Interface System for CAVE Applicat", IEEE Virtual Reality 2006, Alexandria, VA, USA, 25-29 March 2006, IEEE, Piscataway, NJ, USA, DOI: 10.1109/VR.2006.21, pages 261-264, XP010933836, ISBN: 978-1-4244-0224-3 *
LOOSER J ET AL: "Through the looking glass: the use of lenses as an interface tool for augmented reality interfaces", Proceedings GRAPHITE 2004 - 2nd International Conference on Computer Graphics and Interactive Techniques in Australasia and Southeast Asia, 15 June 2004, pages 204-211, XP007913305 *
PETRIDIS P ET AL: "Usability evaluation of the EPOCH multimodal user interface: designing 3D tangible interactions", VRST'06: ACM Symposium on Virtual Reality Software & Technology, Limassol, Cyprus, 1-3 November 2006, ACM, New York, NY, USA, DOI: 10.1145/1180495.1180521, pages 116-122, XP001505743, ISBN: 978-1-59593-321-8 *

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2624117A3 (en) * 2012-02-06 2014-07-23 Honeywell International Inc. System and method providing a viewable three dimensional display cursor
US10643497B2 (en) 2012-10-30 2020-05-05 Truinject Corp. System for cosmetic and therapeutic training
US9443446B2 (en) 2012-10-30 2016-09-13 Truinject Medical Corp. System for cosmetic and therapeutic training
US9792836B2 (en) 2012-10-30 2017-10-17 Truinject Corp. Injection training apparatus using 3D position sensor
US11854426B2 (en) 2012-10-30 2023-12-26 Truinject Corp. System for cosmetic and therapeutic training
US11403964B2 (en) 2012-10-30 2022-08-02 Truinject Corp. System for cosmetic and therapeutic training
US10902746B2 (en) 2012-10-30 2021-01-26 Truinject Corp. System for cosmetic and therapeutic training
WO2014180797A1 (en) * 2013-05-07 2014-11-13 Commissariat à l'énergie atomique et aux énergies alternatives Method for controlling a graphical interface for displaying images of a three-dimensional object
FR3005517A1 (en) * 2013-05-07 2014-11-14 Commissariat Energie Atomique METHOD FOR CONTROLLING A GRAPHICAL INTERFACE FOR DISPLAYING IMAGES OF A THREE-DIMENSIONAL OBJECT
US9912872B2 (en) 2013-05-07 2018-03-06 Commissariat A L'energie Atomique Et Aux Energies Alternatives Method for controlling a graphical interface for displaying images of a three-dimensional object
US9922578B2 (en) 2014-01-17 2018-03-20 Truinject Corp. Injection site training system
US10896627B2 (en) 2014-01-17 2021-01-19 Truinject Corp. Injection site training system
US10290231B2 (en) 2014-03-13 2019-05-14 Truinject Corp. Automated detection of performance characteristics in an injection training system
US10290232B2 (en) 2014-03-13 2019-05-14 Truinject Corp. Automated detection of performance characteristics in an injection training system
US10235904B2 (en) 2014-12-01 2019-03-19 Truinject Corp. Injection training tool emitting omnidirectional light
US10500340B2 (en) 2015-10-20 2019-12-10 Truinject Corp. Injection system
US10743942B2 (en) 2016-02-29 2020-08-18 Truinject Corp. Cosmetic and therapeutic injection safety systems, methods, and devices
US10648790B2 (en) 2016-03-02 2020-05-12 Truinject Corp. System for determining a three-dimensional position of a testing tool
US10849688B2 (en) 2016-03-02 2020-12-01 Truinject Corp. Sensory enhanced environments for injection aid and social training
US11730543B2 (en) 2016-03-02 2023-08-22 Truinject Corp. Sensory enhanced environments for injection aid and social training
US10650703B2 (en) 2017-01-10 2020-05-12 Truinject Corp. Suture technique training system
US10269266B2 (en) 2017-01-23 2019-04-23 Truinject Corp. Syringe dose and position measuring apparatus
US11710424B2 (en) 2017-01-23 2023-07-25 Truinject Corp. Syringe dose and position measuring apparatus

Similar Documents

Publication Publication Date Title
WO2011043645A1 (en) Display system and method for displaying a three dimensional model of an object
US9201568B2 (en) Three-dimensional tracking of a user control device in a volume
KR101823182B1 (en) Three dimensional user interface effects on a display by using properties of motion
CN104471511B (en) Identify device, user interface and the method for pointing gesture
CN101779460B (en) Electronic mirror device
US7965304B2 (en) Image processing method and image processing apparatus
Tomioka et al. Approximated user-perspective rendering in tablet-based augmented reality
US20160343166A1 (en) Image-capturing system for combining subject and three-dimensional virtual space in real time
CN105659295A (en) Method for representing points of interest in a view of a real environment on a mobile device and mobile device therefor
WO2014190106A1 (en) Hologram anchoring and dynamic positioning
CN105074617A (en) Three-dimensional user interface device and three-dimensional operation processing method
Jia et al. 3D image reconstruction and human body tracking using stereo vision and Kinect technology
TW201246088A (en) Theme-based augmentation of photorepresentative view
EP2847616B1 (en) Surveying apparatus having a range camera
KR101892735B1 (en) Apparatus and Method for Intuitive Interaction
CN109564703B (en) Information processing apparatus, information processing method, and computer-readable storage medium
KR20140081840A (en) Motion controlled list scrolling
US11562545B2 (en) Method and device for providing augmented reality, and computer program
CN115335894A (en) System and method for virtual and augmented reality
CN109844600A (en) Information processing equipment, information processing method and program
JP2004272515A (en) Interface method, device, and program
JPH0628452A (en) Three-dimensional image processor
JP4493082B2 (en) CG presentation device, program thereof, and CG display system
Hashimoto et al. Three-dimensional information projection system using a hand-held screen
CN115953557A (en) Product display method based on virtual reality technology

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 09741028

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 09741028

Country of ref document: EP

Kind code of ref document: A1