US20060183096A1 - Interactive teaching and learning device with three-dimensional model

Interactive teaching and learning device with three-dimensional model

Info

Publication number
US20060183096A1
US20060183096A1 (application US10/541,295)
Authority
US
United States
Prior art keywords
model
touched
force
electronic storage
visual
Prior art date
Legal status
Abandoned
Application number
US10/541,295
Inventor
Robert Riener
Rainer Burgkart
Current Assignee
Individual
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by Individual filed Critical Individual
Publication of US20060183096A1
Priority to US12/397,758 (published as US8403677B2)
Status: Abandoned

Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 23/00 Models for scientific, medical, or mathematical purposes, e.g. full-sized devices for demonstration purposes
    • G09B 23/28 Models for scientific, medical, or mathematical purposes for medicine
    • G09B 23/30 Anatomical models

Definitions

  • FIG. 4 a shows a model car mounted on a 6-component force-torque sensor 2. The force-torque data are fed to a data processing unit which has an acoustic output facility by means of a sound generator (sound card). The data processing unit includes a mathematical image of the model car geometry. The model car is composed of a plurality of small components, such as wheels, doors, bumpers and headlights. As soon as the operator (a museum visitor) briefly touches one of the components with his finger, he hears the name of the touched component through the loudspeakers. If he taps the same element twice in quick succession, its function is explained to him in more detail. Simultaneously with the acoustic output, the monitor shows an animated image of the model with the touched part accentuated in colour and a text box which explains the function in more detail. A single long tap starts a short film describing the manufacturing process of the touched part.
  • FIG. 4 b shows an embodiment in which the model car is fastened by two multiple-component sensors 2 a, 2 b. The respective force-torque signals are added vectorially, and the sum signal, which corresponds to the signal of a single sensor, is then further processed by the data processing unit.

Abstract

The invention relates to a device for explaining and demonstrating three-dimensional objects, such as anatomical models or models and exhibits for museums and fairs. According to the invention the model 1 is fastened to its surroundings by at least one multiple-component force-torque measurement device 2 and is combined with an electronic storage and evaluation unit and an optic-visual and/or acoustic indicating device. The force-torque measurement device 2 converts the forces and moments arising when the model 1 is touched into electrical measurement signals which are fed to the electronic storage and evaluation unit; there the contact zone is calculated from the forces and torques detected as a result of the touch and is communicated to the operator as a signal by means of the optic-visual and/or acoustic indicating device.

Description

  • The invention relates to a device for explaining and demonstrating three-dimensional objects, e.g. anatomical models or models and exhibits for museums and fairs.
  • In the field of medical training and in medical demonstrations, anatomical models made of plastic or other materials are frequently used. To explain or accentuate certain anatomical areas it is often advisable to mark the relevant areas with inscriptions or coloured signs.
  • The problem with such models is that, for lack of space, the information to be imparted by an inscription cannot be very extensive. In many cases the inscription is omitted altogether, for example so that the texture of the model (colouring, fine vessels, nerves etc.) remains recognizable. The names and informative details belonging to the individual areas of a model are therefore listed on a separate sheet of paper, and the assignment is made via numbers indicated on the model or via sketches or photos showing the relevant areas. The identification of the model areas of interest is therefore often complicated and unclear.
  • The same problems apply to three-dimensional demonstration models shown in museums or at fairs, where, contrary to medical models, even original objects may be concerned, such as a vintage vehicle in an automobile museum.
  • For these museum and fair models it may likewise be advisable to add inscriptions or coloured marks to illustrate, describe or accentuate certain areas or elements of the model. For this purpose electrical switches are often used which, when actuated on or next to the model, make a certain area of the model visible by means of small incandescent lamps or explain it by means of an illuminated inscription. For special applications, so-called touchpads are used which detect a planar force distribution by means of sensing elements arranged in a matrix array; see also DE 36 42 088 C2. The disadvantage of such arrangements is that sensor components lie between the touched model and the operator, so that the original contact properties, such as surface condition, shape and colour, are distorted. Furthermore, mounting the sensor components requires the model to be modified, so the model may be changed or even damaged. Finally, a plurality of pressure-sensitive sensors has to be used in order to achieve sufficient spatial resolution over the whole model area concerned.
  • These disadvantages are partly removed by so-called navigation or tracking systems, which detect the contact point not on the side of the model but on the side of the operator, e.g. by tracking the operator's finger or instrument. The equipment required to detect the operator's movement, however, is excessive.
  • It is therefore the object of the invention to provide improved models for learning and demonstration purposes which, above all, overcome the above-mentioned disadvantages.
  • This task is solved by a device according to claims 1 and 2:
  • According to claim 1 a teaching and learning device with the following characteristics is provided: a 3D body incorporating the model is fastened to its surroundings by at least one multi-component electrical force-torque measurement device. When the 3D body is touched, the arising forces and torques are converted into electrical measurement signals which are fed to an electronic storage and evaluation system. In this system a mathematical model of the geometry of the 3D body is implemented. Hereinafter, geometry means at least every surface area of the model which can be contacted and which is to be explained, i.e. including the cavities of an anatomical model.
  • Furthermore, an algorithm known as such from the state of the art is implemented which calculates the place on the 3D body currently being touched, for example by a finger or a needle, from the forces and torques detected as a result of the contact.
  • The calculated place of contact is indicated or displayed by means of an indicating device. The mode of indication and/or output is optional and is chosen in accordance with the purpose to be achieved. Optic-visual and/or acoustic indicating devices as known from the state of the art are preferred.
  • The invention according to claim 2, as an invention in its own right, is based on the same basic idea as the invention according to claim 1.
  • The fundamental difference, however, is that no mathematical model is stored in the electronic storage and evaluation system, but rather a data table in which the contact points of interest are stored.
  • These contact points are implemented by means of the "teaching" method known from the state of the art: the place to be "taught" on or in the 3D body (for example a body cavity) is touched with a finger or an instrument, thereby applying a predetermined force which is transferred to the multiple-component force-torque measurement device.
  • The forces and torques detected by the multiple-component force-torque measurement device are compared with the data stored in the data table, and by means of an assignment algorithm the place touched is identified and displayed by the indicating device. Contrary to the invention according to claim 1, which in principle detects any point as long as it is covered by the mathematical model, the invention according to claim 2 can practically detect only the pre-taught points.
  • The model is fastened to a table, a wall, a ceiling or any other base surface by only one multiple-component force-torque measurement device. For the reason of a better mechanical stability even several force measurement devices may be used. Multiple-component force-torque measurement devices are part of the state of the art and are commercially offered as modular components. Additional holding appliances may also be used if required by the dimensions of the 3D body. These holding appliances, however, must be constructed in a way to unambiguously and reproducibly feed the force caused by the touch to the force-torque measurement device or the force-torque measurement devices.
  • The outstanding feature compared with the devices hitherto known is that the touch-sensitive sensor system is not positioned at the touch point of the model but is arranged as connecting element between the model and the adjacencies. For this reason there is no need to expensively adapt the model. Furthermore nearly any number of touch points may be generated, which is not possible with regard to the devices of the known art.
  • The construction as mentioned above allows to visually and/or acoustically explain, describe or accentuate the areas, points of elements of the model touched by the operator. For example the details shown may be the name, certain properties and functions of the area or element of the identified model. The details are made readable or visually recognizable by means of a visual display unit, and/or audible by means of loudspeakers. Also films or graphic animations can be imported depending on what kind of setting and operating activities have been made. Further the amount of the force detected and the direction of the force detected can be further processed by the data processor and reproduced as a graphically animated vector arrow or as an acoustic signal. If for example, the operator applies too high forces to the model a visual or acoustic warning signal or a warning voice may ensure that the operator stops applying force to the model so as to avoid a destruction of the model or the force sensor.
  • The mathematical representation of the used model can be determined by means of 3D-scanners (CT, magnetic resonance tomography, laser scanner etc.) and stored in a data processor. When the teaching method is used the relative areas of the model are touched, and the thereby arising forces and torques are measured and stored and assigned, for example by the input of texts. In this case the assignment method can be supported by up-to-date techniques such as artificial neural networks. As soon as in the course of the later application forces arise which are comparable with those measured in the teaching process, the element touched is detected automatically.
  • The geometric image of the model can also be represented in a graphically animated way. In the animation certain areas of the model which are touched can be marked by colour or by means of an arrow. Even very fine details which are positioned near the touch point but cannot be marked on the real model for lack of space can be visualized by means of the visual display unit.
  • On the model or within certain predetermined areas of the model several distinguishable menu points which optically differ in colour, size, shape, inscription can be marked. If one of these menu points is touched, depending on the kind of the point a certain reaction is released or menu function is executed which is displayed acoustically or graphically.
  • Alternatively or in addition to the points which are optically distinguishable, certain touch patterns with typical force/time behaviours may lead to various graphic and acoustic responses. Such touch patters are for example: long or short contacts, light or strong contact pressing, as well as tapping signs with varying numbers of taps such as the double click in the Windows programme which leads to the opening of a file.
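A minimal sketch of such a force/time pattern classifier follows. The threshold, sampling interval, long-press duration and pattern names are invented for illustration; a fuller version would also bound the pause allowed between taps and distinguish light from strong pressure.

```python
def classify_touch(samples, dt=0.01, threshold=1.0, long_press=0.8):
    """Classify a force-magnitude trace into tap counts or a long press.

    samples: list of force magnitudes (N) sampled every dt seconds.
    Returns "none", "tap", "double-tap", "triple-tap", "taps" or "long-press".
    """
    contacts = []                        # durations of above-threshold runs
    run = 0
    for f in samples + [0.0]:            # sentinel closes a trailing run
        if f >= threshold:
            run += 1
        elif run:
            contacts.append(run * dt)
            run = 0
    if not contacts:
        return "none"
    if any(d >= long_press for d in contacts):
        return "long-press"
    return {1: "tap", 2: "double-tap", 3: "triple-tap"}.get(len(contacts), "taps")
```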
  • The invention can be operated in two different modes. The function described above represents the so-called standard mode, in which a touch results in a graphic and/or acoustic response. In the so-called inquiry mode, a graphic or acoustic request is first put to the operator, for example to touch a certain area of the model. Thereupon the operator, e.g. a student being examined, touches the supposed area, and the data processor checks whether the correct area has been touched, i.e. detected. It is further possible to verify whether the operator has contacted the areas in the right order and, if required, also within the correct periods of time and with the correct amounts and directions of force. Success, failure or an evaluation is then communicated to the operator by means of the graphic and/or acoustic display. This mode tests the operator's knowledge.
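The checking step of the inquiry mode can be as simple as comparing the sequence of touched areas against the requested one. This is a sketch; the area names, the scoring rule and the optional time limit are invented for illustration.

```python
def grade_inquiry(requested, touched, times=None, max_seconds=None):
    """Score an inquiry-mode run: right areas in the right order,
    optionally checked against a per-touch time limit."""
    hits = sum(1 for want, got in zip(requested, touched) if want == got)
    on_time = True
    if times is not None and max_seconds is not None:
        on_time = all(t <= max_seconds for t in times)
    return {"hits": hits, "total": len(requested), "on_time": on_time}
```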
  • According to claim 3 the optic-visual indicating device includes a projector which projects visual data such as texts or images directly onto the touched area; this also makes it possible to project onto rear surfaces. It is required, however, that the colour and the surface of the model area are adjusted to match the projection. If, for example, the operator presses on the lung of the model with increasing force, deeper-lying sections are projected and represented. As the specialist knows, such projections can be shown on separate monitors as well.
  • According to claim 4 the projector is provided as a video projector. This, for example, makes it possible to show the blood transport in the lung in a way very similar to reality, thus further improving the informative effect.
  • It should further be mentioned that there are a number of intelligent algorithms for evaluating the signals of the force-torque measurement device. In the case of a dismountable anatomical model, for example, the remaining mass is reduced when an organ is removed. Therefore, if the masses of the dismountable organs are different and known, the dismounted organ can be determined by a simple weight classification. It is further possible to utilize the shift of the model's centre of gravity on removal of an organ: if a certain organ is removed, the force-torque measurement device in principle records not only a reduction in weight but also a tilting moment. To minimize the possibility of confusion, algorithms for plausibility checks can additionally be provided. Consequently, if two organs have the same weight but are positioned one behind the other and can therefore be removed only in a predetermined order, the organ just removed can still be clearly identified.
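The weight-and-tilting-moment idea can be sketched as follows, assuming static readings, known organ masses and positions in the sensor frame, and NumPy. All names are illustrative; the patent does not prescribe this particular formulation.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def organ_wrench(mass, position):
    """Static wrench (Fx..Tz) one organ contributes to the base sensor:
    its weight plus the tilting moment of that weight about the origin."""
    w = np.array([0.0, 0.0, -mass * G])
    return np.concatenate([w, np.cross(position, w)])

def removed_organ(organs, before, after):
    """Identify the removed organ: the difference between the readings
    taken before and after removal must match one organ's own wrench.

    organs: {name: (mass_kg, position_xyz_in_sensor_frame)}
    before/after: 6-component static sensor readings.
    """
    delta = np.asarray(before, dtype=float) - np.asarray(after, dtype=float)
    best, best_err = None, float("inf")
    for name, (mass, pos) in organs.items():
        err = np.linalg.norm(delta - organ_wrench(mass, np.asarray(pos, dtype=float)))
        if err < best_err:
            best, best_err = name, err
    return best
```

The plausibility checks mentioned above would be layered on top, e.g. rejecting a candidate organ that is still blocked by another organ in front of it.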
  • The invention will now be described in greater detail by means of example embodiments and schematic drawings:
  • FIG. 1 a-f show the application of the invention to a model of an anatomic torso.
  • FIG. 2 shows the application of the invention to a model ear for the training in acupuncture.
  • FIG. 3 shows an embodiment with divided model.
  • FIG. 4 a, b show an embodiment of the invention for a non-medical application.
  • FIG. 1 a shows an artificial open upper body 1 (phantom torso) with dismountable organs. In this embodiment the invention serves to support medical training. The torso is mounted on a 6-component force-torque sensor 2. The sensor data are fed to a data processing unit with graphic and acoustic output. On the individual organs there are several small dots in yellow, blue and green. If, for example, a medical student touches one of the organs or a certain area of an organ, the name of the relevant organ or area is communicated to him acoustically. Simultaneously a monitor shows the torso as a shaded artificial image with the name of the touched area inserted. By way of graphic animation the touched structures can be accentuated in colour. Even very fine anatomical structures, such as blood vessels, veinlets, nerve lines and muscle attachment points, can be made visible. If the operator touches the yellow dot on an artificial organ of the torso, a photorealistic view of the organ or the area of the organ is presented to him on the monitor. In the case of the blue dot, the physiological relevance and possible pathologies are described graphically and acoustically. Finally, the green dot starts graphic animations and films with sound. Further, by increasing the pressure on an organ or the skin of the torso model it becomes possible to dip into the depth like a pin prick; various body sections and internal views are then represented in a graphically animated way. In the inquiry mode (control mode) an artificial voice can request the operator to touch a certain anatomically relevant area. The place touched is then recorded by the data processing unit, and the result is communicated to and commented on for the operator acoustically and graphically.
  • FIG. 1 b shows the operator removing one of the organs from the torso. As a result the sensor records a changed weight and a shift of the centre of gravity. As the weights of the individual components are known, the removed organ is detected automatically. Thereupon the artificial display of the torso on the monitor adjusts itself to the changed torso.
  • FIG. 1 c shows how, after the removal of several organ parts, deeper-lying structures that were previously hidden become visible and can be explored further by touching them, with acoustic and graphic support.
  • FIG. 1 d shows a different graphic and acoustic display using a head-mounted-display (HMD). By the projection of two separate images to both eyes a realistic three-dimensional image impression is achieved. The acoustic message is communicated to the operator by means of stereo headphones.
  • FIG. 1 e shows a different graphic display in which the text and image data are projected directly to the touched model. This can be realized by means of a commercial projection beamer, in which case as for this example the model surface is to be white or unicoloured in a light colour.
  • FIG. 1f shows an embodiment in which the phantom torso is fastened by two multiple-component sensors 2 a, 2 b. The respective force-torque signals are added vectorially, and the resulting sum signal, which corresponds to the signal of a single sensor, is further processed by the data processing unit.
  • FIG. 2 shows an embodiment in which a phantom ear is used for acupuncture training. The phantom ear is connected to a force-torque sensor 2 and carries marks at the most important acupuncture positions. If the operator touches the phantom ear with a sharp-pointed object similar to an acupuncture needle, a voice and the monitor image tell him the name and the effect of the targeted point. In this application the acoustic information and the text insertions are particularly useful because there is not enough space on the ear for the names and effects of the points. Sound and image can also guide the operator when he searches for a desired point. It is further possible to measure how much time the operator needs to find a certain point and in which sequence he approaches the points.
  • FIG. 3 shows an embodiment in which the model is divided. The right model part is connected to the table by a force-torque sensor 2 a, while the left model part is connected to the right model part by a further force-torque sensor 2 b. The sensor 2 b is the only connecting element between the two model parts. With this arrangement two forces, one per model part, can be applied and localized, which also allows ambidextrous pointing activities. During data processing the forces acting on the left part can be processed unambiguously via the connecting sensor 2 b. Since, however, the table-side sensor 2 a receives the forces of both model parts, the localization of the right contact point requires that the two sensors be coupled: the force-torque data of the connecting sensor are subtracted component by component, i.e. vectorially, from the force-torque data of the table-side sensor (in a common coordinate system).
  • FIG. 4a shows a model car mounted on a 6-component force-torque sensor 2. The force-torque data are fed to a data processing unit with an acoustic output facility provided by a sound generator (sound card). The data processing unit contains a mathematical image of the model car geometry. The model car is composed of a plurality of small components, such as wheels, doors, bumpers and headlights. As soon as the operator (a museum visitor) briefly touches one of the components with his finger, he hears the name of the touched component through loudspeakers. If he quickly taps the same element twice in a row, its function is explained to him in more detail. Simultaneously with the acoustic output, the monitor shows an animated image of the model with the touched part accentuated in colour and a text box explaining its function in more detail. A single long press starts a short film describing the manufacturing process of the touched part.
  • FIG. 4b shows an embodiment in which the model car is fastened by two multiple-component sensors 2 a, 2 b. The respective force-torque signals are added vectorially, and the resulting sum signal, which corresponds to the signal of a single sensor, is further processed by the data processing unit.
  • It is obvious that instead of the model car a real object, such as an automobile, can also be equipped with the invention. The particular significance for the fields of application museum, exhibition or fair undoubtedly lies in the novel interaction of the exhibited object with the public, which until now has often not been allowed to touch the exhibits.
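The localization principle behind FIG. 1a (and claim 1) admits a compact sketch. A contact force F applied at a point r produces the torque τ = r × F at the sensor, so the candidate contact points form a line parallel to F; intersecting that line with the stored geometry of the 3D body yields the contact zone. The sketch below is illustrative only: it substitutes a spherical surface for the real torso mesh, and the function names are assumptions, not taken from the patent.

```python
import numpy as np

def force_line(F, tau):
    """Line of action of a contact force.

    Solves tau = r x F for r; all solutions form the line
    r(s) = r0 + s * d with direction d parallel to F.
    """
    F = np.asarray(F, float)
    tau = np.asarray(tau, float)
    r0 = np.cross(F, tau) / np.dot(F, F)
    d = F / np.linalg.norm(F)
    return r0, d

def contact_on_sphere(F, tau, radius=1.0):
    """Intersect the force line with a sphere centred at the sensor
    origin (a stand-in for the real surface mesh of the model).

    Of the two intersections, the contact point is the one where the
    force points into the body (d . p < 0).
    """
    r0, d = force_line(F, tau)
    b = np.dot(r0, d)
    c = np.dot(r0, r0) - radius ** 2
    disc = b * b - c
    if disc < 0:
        return None  # the force line misses the model surface
    s = -b - np.sqrt(disc)  # entry point, where d points into the sphere
    return r0 + s * d
```

In the actual device the sphere intersection would be replaced by a ray-mesh intersection against the mathematical model of the body geometry held in the electronic storage and evaluation unit.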
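The organ-removal detection of FIG. 1b can likewise be sketched: since the weight of every component is known, the measured weight decrease identifies the removed organ (the patent additionally exploits the shift of the centre of gravity). The organ names, masses and tolerance below are invented for illustration.

```python
# Hypothetical component masses in kilograms (illustrative values only).
ORGAN_MASS = {
    "liver": 1.50,
    "heart": 0.30,
    "left lung": 0.45,
    "right kidney": 0.15,
}

def detect_removed_organ(weight_before, weight_after, tolerance=0.05):
    """Return the organ whose known mass best explains the measured
    weight decrease, or None if no organ matches within tolerance."""
    delta = weight_before - weight_after
    best = min(ORGAN_MASS, key=lambda organ: abs(ORGAN_MASS[organ] - delta))
    if abs(ORGAN_MASS[best] - delta) <= tolerance:
        return best
    return None
```

A matched organ would then trigger the update of the artificial torso display described for FIG. 1b.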
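The two-sensor arrangements of FIGS. 1f and 4b add the force-torque signals vectorially into one equivalent sensor signal, while the divided model of FIG. 3 subtracts the connecting sensor's signal from the table-side sensor's. A minimal sketch, assuming each sensor's mounting position is known and all readings are expressed in one common coordinate system (the positions and function names are illustrative):

```python
import numpy as np

def resultant(readings):
    """Vectorially combine 6-axis force-torque readings into one
    equivalent reading expressed at a common reference point (origin).

    readings: iterable of (p, F, tau), where p is the sensor's mounting
    position and F, tau are its measured force and torque.
    """
    F_sum = np.zeros(3)
    tau_sum = np.zeros(3)
    for p, F, tau in readings:
        F_sum = F_sum + F
        # transporting a torque to the origin adds the moment p x F
        tau_sum = tau_sum + tau + np.cross(p, F)
    return F_sum, tau_sum

def right_part_load(table_reading, connect_reading):
    """FIG. 3: subtract the connecting sensor 2b from the table-side
    sensor 2a, component by component, leaving only the load applied
    directly to the right model part."""
    (p_a, F_a, tau_a) = table_reading
    (p_b, F_b, tau_b) = connect_reading
    F = F_a - F_b
    tau = (tau_a + np.cross(p_a, F_a)) - (tau_b + np.cross(p_b, F_b))
    return F, tau
```

With the torques referred to a common point, the combined signal can be handled exactly like that of a single sensor, as the description states.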
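The museum scenario of FIG. 4a distinguishes a brief touch (name), a quick double tap (function) and a single long press (manufacturing film). A small sketch of such a gesture classifier; the timing thresholds and function name are assumptions for illustration:

```python
def classify_taps(presses, releases, double_gap=0.4, long_press=1.0):
    """Classify touch gestures on one component from press/release
    timestamps in seconds.

    Returns "name" for one short tap, "film" for one long press,
    "function" for two quick taps in a row, otherwise None.
    """
    if len(presses) == 1:
        duration = releases[0] - presses[0]
        return "film" if duration >= long_press else "name"
    if len(presses) == 2 and presses[1] - releases[0] <= double_gap:
        return "function"
    return None
```

The returned label would select the acoustic output and the accompanying monitor display described for FIG. 4a.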

Claims (4)

1. An interactive teaching and learning device which comprises
a 3D body (1) to be touched, which is fastened to its surroundings by means of at least one
multiple-component force-torque measurement device (2),
an electronic storage and evaluation unit, and
an optic-visual and/or acoustic indicating device, whereby
the force-torque measurement device converts the forces and moments arising when the model body is touched into electrical measurement signals which are fed to the electronic storage and evaluation unit, while
a mathematical model of the geometry of the 3D body is implemented in the electronic storage and evaluation unit, and
an algorithm which, on the basis of the forces and torques detected when the touch is carried out, calculates the contact zone on the 3D body, which is communicated to the touching operator as a signal by means of the optic-visual and/or acoustic indicating device.
2. A teaching and learning device which comprises
a 3D body (1) to be touched, which is fastened to its surroundings by at least one
multiple-component force-torque measurement device (2),
an electronic storage and evaluation unit, and
an optic-visual and/or acoustic indicating device, whereby
the force-torque measurement device converts the forces and moments arising when the model body is touched into electrical measurement signals which are fed to the electronic storage and evaluation unit,
force-torque measurement signals of predetermined contact points are stored in the memory of the electronic storage and evaluation unit, and
an assignment algorithm is implemented which, based on the detected forces and torques, assigns the contact zone on the 3D body, which is communicated to the touching operator as a signal by means of the optic-visual and/or acoustic indicating device.
3. A teaching and learning device according to claim 1 or 2, characterized in that the optic-visual indicating device comprises a projector projecting visual data, such as texts or images, directly onto the touched area.
4. A teaching and learning device according to claim 3, characterized in that the projector is a video projector.
US10/541,295 2002-12-31 2003-12-31 Interactive teaching and learning device with three-dimensional model Abandoned US20060183096A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/397,758 US8403677B2 (en) 2002-12-31 2009-03-04 Interactive teaching and learning device

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
DE10261673.6 2002-12-31
DE10261673A DE10261673A1 (en) 2002-12-31 2002-12-31 Interactive teaching and learning device
PCT/DE2003/004292 WO2004061797A1 (en) 2002-12-31 2003-12-31 Interactive teaching and learning device with three-dimensional model

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US12/397,758 Continuation US8403677B2 (en) 2002-12-31 2009-03-04 Interactive teaching and learning device

Publications (1)

Publication Number Publication Date
US20060183096A1 true US20060183096A1 (en) 2006-08-17

Family

ID=32519512

Family Applications (2)

Application Number Title Priority Date Filing Date
US10/541,295 Abandoned US20060183096A1 (en) 2002-12-31 2003-12-31 Interactive teaching and learning device with three-dimensional model
US12/397,758 Expired - Fee Related US8403677B2 (en) 2002-12-31 2009-03-04 Interactive teaching and learning device

Family Applications After (1)

Application Number Title Priority Date Filing Date
US12/397,758 Expired - Fee Related US8403677B2 (en) 2002-12-31 2009-03-04 Interactive teaching and learning device

Country Status (6)

Country Link
US (2) US20060183096A1 (en)
EP (1) EP1579406B1 (en)
CN (1) CN1745404B (en)
AU (1) AU2003303603A1 (en)
DE (1) DE10261673A1 (en)
WO (1) WO2004061797A1 (en)


Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7736149B2 (en) * 2004-09-29 2010-06-15 Towliat Faye F Operating room display and related methods
DE102005062611A1 (en) * 2005-12-23 2007-06-28 Burgkart, Rainer, Dr. med. Simulation device for simulating penetration process of e.g. needle, has drive device controlled based on signals such that pressure and/or haptic sensation is provided during pressing of instruments against body by surgeon
CN102314779B (en) * 2010-06-30 2013-11-27 上海科技馆 Poultry embryo growth-to-hatching process demonstration device and demonstration method thereof
DE102013019563B4 (en) 2013-11-22 2021-11-18 Audi Ag Method for providing information about an environment to a smart device
CN105160977A (en) * 2015-08-05 2015-12-16 成都嘉逸科技有限公司 Human body anatomy 3D teaching system
WO2018195946A1 (en) * 2017-04-28 2018-11-01 Shenzhen Mindray Bio-Medical Electronics Co., Ltd. Method and device for displaying ultrasonic image, and storage medium
CN109729325A (en) * 2017-10-30 2019-05-07 Wang Yadi Live projection system realizing dynamic tracking of a rotating exhibition model
CN110619797A (en) * 2018-06-20 2019-12-27 Tianjin Xiaomuzhi Purification Technology Co., Ltd. Intelligent operating room demonstration teaching system
US10410542B1 (en) * 2018-07-18 2019-09-10 Simulated Inanimate Models, LLC Surgical training apparatus, methods and systems
CN115465006A (en) * 2022-10-21 2022-12-13 Xi'an International University Method and device for realizing touchable visual perception of laser relief images for blind persons

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5376948A (en) * 1992-03-25 1994-12-27 Visage, Inc. Method of and apparatus for touch-input computer and related display employing touch force location external to the display
US5400661A (en) * 1993-05-20 1995-03-28 Advanced Mechanical Technology, Inc. Multi-axis force platform
US6141000A (en) * 1991-10-21 2000-10-31 Smart Technologies Inc. Projection display system with touch sensing on screen, computer assisted alignment correction and network conferencing
US6597347B1 (en) * 1991-11-26 2003-07-22 Itu Research Inc. Methods and apparatus for providing touch-sensitive input in multiple degrees of freedom
US6915709B2 (en) * 2003-03-31 2005-07-12 Wacoh Corporation Force detection device
US20050246109A1 (en) * 2004-04-29 2005-11-03 Samsung Electronics Co., Ltd. Method and apparatus for entering information into a portable electronic device

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3742935A (en) * 1971-01-22 1973-07-03 Humetrics Corp Palpation methods
US4254562A (en) * 1979-02-14 1981-03-10 David Murray Combination cardiovascular electronic display/teaching apparatus, system and methods of constructing and utilizing same
DE3638192A1 (en) * 1986-11-08 1988-05-19 SYSTEM AND METHOD FOR TESTING A PERSON IN CARDIOPULMONARY RESUSCITATION (CPR) AND EVALUATING CPR EXERCISES
DE3642088A1 (en) * 1986-12-10 1988-06-23 Wolfgang Brunner ARRANGEMENT FOR MEASURING POWER DISTRIBUTION
WO1991004553A2 (en) * 1989-09-18 1991-04-04 Paolo Antonio Grego Edizioni S.A.S. An indicating panel for facilitating the acquisition of information and/or localization of selected points on a map
US5259764A (en) * 1991-04-29 1993-11-09 Goldsmith Bruce W Visual display apparatus for the display of information units and related methods
DE10017119A1 (en) * 2000-04-06 2001-10-31 Fischer Brandies Helge Device for measuring forces acting on teeth, teeth models and/or implants has sensors for detecting spatial force and/or torque components on teeth, model tooth crowns, dentures, implants
AUPR118100A0 (en) * 2000-11-02 2000-11-23 Flinders Technologies Pty Ltd Apparatus for measuring application of pressure to an imitation body part
DE10217630A1 (en) * 2002-04-19 2003-11-13 Robert Riener Method and device for learning and training dental treatment methods


Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060183099A1 (en) * 2005-02-14 2006-08-17 Feely Richard A Education and test preparation system, method and computer program product
US20080233550A1 (en) * 2007-01-23 2008-09-25 Advanced Fuel Research, Inc. Method and apparatus for technology-enhanced science education
US20090081627A1 (en) * 2007-09-26 2009-03-26 Rose Marie Ambrozio Dynamic Human Model
US8469715B2 (en) * 2007-09-26 2013-06-25 Rose Marie Ambrozio Dynamic human model
US20100159434A1 (en) * 2007-10-11 2010-06-24 Samsun Lampotang Mixed Simulator and Uses Thereof
US20100129781A1 (en) * 2008-11-21 2010-05-27 National Taiwan University Electrical bronze acupuncture statue apparatus
US20120156665A1 (en) * 2009-06-11 2012-06-21 University Of Pittsburgh - Of The Commonwealth System Of Higher Education Real-Time X-Ray Vision for Healthcare Simulation
US9053641B2 (en) * 2009-06-11 2015-06-09 University of Pittsburgh—of the Commonwealth System of Higher Education Real-time X-ray vision for healthcare simulation
JP2012148059A (en) * 2010-12-27 2012-08-09 Kochi Univ Of Technology Biological image processing system equipped with tangible device
US10354555B2 (en) * 2011-05-02 2019-07-16 Simbionix Ltd. System and method for performing a hybrid simulation of a medical procedure
US9773347B2 (en) 2011-11-08 2017-09-26 Koninklijke Philips N.V. Interacting with a three-dimensional object dataset
US10325522B2 (en) * 2012-01-27 2019-06-18 University of Pittsburgh—of the Commonwealth System of Higher Education Medical training system and method of employing
US8966681B2 (en) 2013-02-26 2015-03-03 Linda L. Burch Exercise mat
US20160240102A1 (en) * 2015-02-12 2016-08-18 Vikram BARR System for audio-tactile learning with reloadable 3-dimensional modules
FR3044121A1 (en) * 2015-11-19 2017-05-26 Univ Paris 1 Pantheon-Sorbonne EQUIPMENT OF INCREASED REALITY AND TANGIBLE INTERFACE
US20180040261A1 (en) * 2016-08-03 2018-02-08 Megaforce Company Limited Human body cavity model
US10097817B2 (en) * 2016-08-03 2018-10-09 MEGAFORCE COMPANY LlMlTED Double-image projection device projecting double images onto 3-dimensional ear canal model
US10115321B2 (en) * 2016-08-03 2018-10-30 Megaforce Company Limited Human body cavity model

Also Published As

Publication number Publication date
WO2004061797A1 (en) 2004-07-22
US8403677B2 (en) 2013-03-26
EP1579406B1 (en) 2016-07-06
US20090162823A1 (en) 2009-06-25
CN1745404A (en) 2006-03-08
AU2003303603A1 (en) 2004-07-29
DE10261673A1 (en) 2004-07-15
EP1579406A1 (en) 2005-09-28
CN1745404B (en) 2010-06-23


Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION