WO2009068838A1 - Virtual human interaction system - Google Patents

Virtual human interaction system

Info

Publication number
WO2009068838A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual
appearance
virtual human
patient
human
Application number
PCT/GB2007/050719
Other languages
French (fr)
Inventor
Stephen Chapman
Luke Bracegirdle
Original Assignee
Keele University
Application filed by Keele University
Priority to AU2007361697A (AU2007361697B2)
Priority to PCT/GB2007/050719 (WO2009068838A1)
Publication of WO2009068838A1

Classifications

    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H10/00 - ICT specially adapted for the handling or processing of patient-related medical or healthcare data
    • G16H10/60 - ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g. for electronic patient records


Abstract

A virtual human interaction system is described for use on a web-enabled computer which facilitates the training and education of medical services practitioners such as doctors, nurses, pharmacists and the like by allowing them to interact virtually with a virtual patient delivered by the system and displayed on the computer screen. The system embodies a plurality of cases, and for each case there are a number of possible outcomes, depending on the choices made by the medical services practitioner at each stage in a particular case. Such choices are made in the form of user input to the system through the computer, and each case consists of a decision tree element consisting of branch elements from which the decision tree may branch in one or more directions toward further branch elements or tree termini, the user input causing the system to move through the decision tree of the case.

Description

VIRTUAL HUMAN INTERACTION SYSTEM
This invention relates to a virtual human interaction system, and more particularly to a virtual human interaction system capable of being provided over local or disparate computer networks to users at one or more terminals and whereat users are presented with a situation involving one or more humans, which are virtually represented onscreen at said terminal, and with which said users must interact by providing one or more inputs at the terminal.
More specifically, this invention relates to a virtual patient system ideally intended for trainee medical practitioners to help them to learn or enhance their diagnostic skills based on a simulated doctor/patient scenario which is virtually represented on screen. Of course, while the following description relates almost exclusively to the use of the invention in the medical industry for the purpose specified, the reader will instantly become aware that the invention has a potentially much wider application in the training and education fields generally, and therefore the invention should be considered as encompassing such applications.
BACKGROUND
Virtual education and/or training systems which involve some type of background computer program coupled with images and/or video files (e.g. mpeg, avi and the like) for display on-screen are well established. Furthermore, such systems can be provided both locally, in terms of being provided and loaded on an individual, stand-alone, non-networked PC, and in distributed fashion, whereby the system is stored centrally and delivered either physically in terms of being downloadable to suitably networked PCs, or virtually in terms of the program being executable at the server side and the results of the execution (which is to some extent controlled by the user input at the networked PC) are then transmitted by HTML or other suitable format so that the display on the user's PC can be caused to change as program execution continues.
Indeed there are many examples of such systems. One example can be found at
[URL rendered as an image in the original document]
and is entitled "The Interactive Patient". In this system, which is presented over the internet in HTML format, the user clicks through a series of stages, such as "History", "Physical Exam", "X-Ray Exam", and "Diagnosis", and with each stage that is clicked, a web page is presented to a user on which some explanatory text concerning the condition, symptoms, and medical history of a virtual patient is provided, together with a static, real-life photo image of a doctor greeting, examining, interrogating or otherwise dealing with a patient. Of course, both doctor and patient may be represented by actors, and in the case of systems where video footage is provided to users, such actors would be previously instructed how to behave during filming according to the particular notional plight of the patient, e.g. the actor playing the patient is told to limp as a result of having a notional sprained ankle.
This system is typical of many available on the web, in that a student is presented with a patient case to read, optionally provided with some patient medical history or medical records, and is then presented with a number of related options. Such systems are fundamentally limited in that they can relate only to one possible situation. For example, the user will be presented with a case to which the photos or video footage used are exclusively appropriate. Additionally, the text used in describing the case is most likely to be hard-coded into the website being provided, with the result that a total re-design is required if such systems are to be useful in training users in other situations. Indeed, in the case of the training of medical practitioners, it is almost imperative that they be exposed to as many different cases and patient diagnosis scenarios as possible, to provide them with as well-rounded and comprehensive a training as possible. In this regard, the type of system immediately previously described is wholly inadequate.
Other systems of this type can be found at:
http://courses.pharmacy.unc.edu/asthma/
[URL rendered as an image in the original document]
http://research.bidmc.harvard.edu/VPTutorials/
http://radiography.derby.ac.uk/NOS Conference/Dawn%20Skelton%202.pdf
As an advance on the above, it has been proposed to use virtual reality to enhance the training/user experience. A technical paper entitled "Virtual Patient: a Photo-real Virtual Human for VR-based Therapy" by Bernadette KISS, Balazs BENEDEK, Gabor SZIJARTO, Gabor CSUKLY and Lajos SIMON discussed a high-fidelity Virtual Human Interface (VHI) system which was developed using low-cost and portable computers.
The system features real-time photo-realistic digital replicas of multiple individuals capable of talking, acting and showing emotions, with over 60 different facial expressions. These "virtual patients" appear in a high-performance virtual reality environment featuring full panoramic backgrounds, animated 3D objects, behaviour and A.I. models, a complete vision system for supporting interaction and advanced animation interfaces. The VHI takes advantage of the latest advances in computer graphics. As such, it allows medical researchers and practitioners to create real-time responsive virtual humans for their experiments using computer systems priced under $2000.
In this document, the creation of computer-generated, animated humans in real time is used to address the needs of emerging virtual-reality based medical applications, such as CyberTherapy, virtual patients, and digital plastic surgery. The authors developed an open-architecture, low-cost and portable virtual reality system, called the Virtual Human Interface, that employs high-resolution, photo-real virtual humans animated in real time to interact with patients. It is said that this system offers a unique platform for a broad range of clinical and research applications. Examples include virtual patients for training and interviewing, highly realistic 3D environments for cue exposure therapy, and digital faces as a means to diagnose and treat psychological disorders. Its open architecture and multiple layers of interaction possibilities make it ideal for creating controlled, repeatable and standardized medical VR solutions. By virtue of the system proposed, the virtual patients can talk, act and express a wide range of facial expressions, emotions, and body gestures. Their motions and actions can be imported from MPEG4 or motion capture files, or animated. An additional scripting layer allows researchers to use their own scripting controls implemented in XML, HTML, LUA, TCL/TK or TCP/IP.
They state that the system also runs in a browser environment over the Internet and supports multiple ways for the therapist to enter the patient's virtual space, whether locally or remotely, such as a live video feed, remote teleconferencing and even a virtual studio (chroma-key) module.
Despite the obvious advantages of providing a virtual patient as described above, and in particular the utility of such a system in bringing virtual doctor/patient encounters to life on screen, there are still drawbacks in that this system requires the designer of particular cases to redesign each case with new video or motion-capture animations every time a new response is required from the virtual patient. As will immediately be appreciated, this represents a massive overhead, and a probably unworkable solution to the problem of providing trainees with a great variety of cases to study. Additionally, users of a system of this type would need to have a high level of technical skill in order to design a new patient case. However, a particular drawback is that a simulation of this type can only at best simulate the developer's point of view, or research findings. This raises questions about how to simulate accurately the collective viewpoints of a series of subject-area experts, or to simulate the current evidence base for a specific domain and demonstrate to the learner the probable results of their decisions in treating the patient (e.g. how do you show the student that, had their action in the simulation been taken with a real-life patient, it would have had a 63% probability of harming the patient?). Other key messages based on published evidence for a certain therapeutic area (and not on personal opinion) need to be conceptualised in order for a simulation to take on a greater value.
At this point, it will be beneficial to consider the existing work in the field of decision analysis. Systems based on decision analysis are currently known, and one example can be found at:
[URL garbled in the original document]
This system, developed by the University of Huddersfield, UK, creates a "virtual hospital" in HTML and other code, and is a computer-based learning tool for health care professionals that simulates the care context for patients within the environmental context of a General Hospital.
The system has been built from components that reflect typical aspects of a care environment, e.g. patients, patient assessment forms, observation records etc. The components provide the facilitator with the means to support complex and imaginative patient-based scenarios to satisfy key learning outcomes. The system is innovative in the design and delivery of course materials, as it encourages students to engage with nursing subject matter through vignettes and patient-based scenarios within the context of application to patient care, and allows students to explore wider issues relating to management of the care environment through duty roster and resource management and exploration of evidence-based practice.
In this system, however, cartoon-like animations are provided on-screen as opposed to virtual full-size patients, and although the system provides a good overall "feel", it requires each case to be designed ab initio in advance, rather than certain decisions each resulting in an ad hoc animation from a virtual patient. It is also deficient in that it allows little scope for extensive development; for example, it cannot readily be scaled to include vast numbers of doctor/patient cases or prognosis/diagnosis scenarios.
Another system is known from US 2005/0170323, which discloses a computer system in which normal data indicating normal conditions in a patient is stored together with abnormality data received from an author, a medical knowledge base, and a mentoring knowledge base. An instance of a virtual patient is generated based on the normal data and the abnormality data, the instance describing a sufficiently comprehensive physical state of a patient having the abnormal condition to simulate clinical measurements of the patient's condition. Action data is received from a trainee who is different from the author; the action data indicates a requested action relevant to dealing with the instance. Response data is generated based on the action data and the instance, and display data is presented to the trainee based on the response data. The display data indicates information about the instance available as a result of the requested action. The system does not provide any teaching in relation to the generation, acquisition and utilisation of computer-generated media that combine clinical experience and published evidence. Moreover, the system requires the active input of the author, whose online presence is therefore needed for full functionality.
A further disadvantage of all these systems is that none provide a medium whereby an educator can design his own cases for teaching purposes without some computer programming experience.
It is a primary object of embodiments of the present invention to provide a system which overcomes at least some of the disadvantages of the prior art and provides a means whereby a given scenario can be analysed by a student or the like, who is then allowed to take some action (e.g. diagnosis and/or prognosis or changes to medication or lifestyle), with the effects of that action being visualised in real time in combination with virtual reality technology for representing humans on-screen, to provide a remarkable learning experience. It is intended to combine some of the inherent benefits of current technology and propose an innovative system and method to provide an evidence-based simulation that provides immediate feedback to the user via a virtual patient computer-generated character, one which is not limited to any particular platform but may be implemented on various different platforms using the technology resident on many modern computer systems.
BRIEF SUMMARY OF THE DISCLOSURE
According to a first aspect of the present invention, there is provided a virtual human interaction system for use on a user terminal, said system being adapted for a plurality of cases to which a number of possible outcomes can be achieved depending on the user input to the system at various stages through its delivery, each case consisting of at least a decision tree element consisting of branch elements from which the decision tree may branch in one or more directions toward further branch elements or tree termini, each branch element and terminus including descriptors of a particular condition of the virtual human at that time in the specific case, said system comprising at least
i) a virtual human representation element capable of being displayed graphically to appear as a virtual human on screen; and
ii) a plurality of appearance descriptor elements capable of interacting with the virtual human representation element so as to cause a change in the appearance thereof, said appearance descriptor elements being based on real-life human conditions and/or published evidence which affect the physical appearance of humans generally and which are thus mimicked in the virtual human;
wherein the system causes the appearance of the virtual human to change by applying one or more of said appearance descriptor elements to said virtual human representation element when the system is caused to be at one of the branch elements or termini as a result of user input at said terminal.
According to a second aspect of the present invention, there is provided a method of data processing in which there is provided a virtual human interaction system for use on a user terminal, said system being adapted for a plurality of cases to which a number of possible outcomes can be achieved depending on the user input to the system at various stages through its delivery, each case consisting of at least a decision tree element consisting of branch elements from which the decision tree may branch in one or more directions toward further branch elements or tree termini, each branch element and terminus including descriptors of a particular condition of the virtual human at that time in the specific case, said system comprising at least
i) a virtual human representation element capable of being displayed graphically to appear as a virtual human on screen; and
ii) a plurality of appearance descriptor elements capable of interacting with the virtual human representation element so as to cause a change in the appearance thereof, said appearance descriptor elements being based on real-life human conditions and/or published evidence which affect the physical appearance of humans generally and which are thus mimicked in the virtual human;
wherein the appearance of the virtual human is changed by applying one or more of said appearance descriptor elements to said virtual human representation element when the system is caused to be at one of the branch elements or termini as a result of user input at said terminal.
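To make the two aspects just stated concrete, the following minimal sketch (in Python; all class, field and function names are invented for illustration, since the claims do not prescribe any implementation) shows one way the branch elements, termini and appearance descriptor elements could be represented, and how arrival at a new branch element on user input could change the appearance of the virtual human representation element:

from dataclasses import dataclass, field

@dataclass
class BranchElement:
    # One node of a case's decision tree; a terminus is simply a node
    # with no outgoing branches.
    node_id: int
    condition_text: str                           # descriptor of the patient's condition here
    appearance_descriptors: list = field(default_factory=list)
    branches: dict = field(default_factory=dict)  # decision label -> next node_id

class Avatar:
    # Stand-in for the on-screen virtual human representation element.
    def apply(self, descriptor: str) -> None:
        print(f"avatar now shows: {descriptor}")  # a real system would invoke an animation

def advance(case: dict, current: int, decision: str, avatar: Avatar) -> int:
    # Move to the branch element selected by the user's input and apply the
    # appearance descriptor elements recorded at that element.
    nxt = case[current].branches[decision]
    for descriptor in case[nxt].appearance_descriptors:
        avatar.apply(descriptor)
    return nxt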
Further features of the invention are provided in the dependent claims appended hereto.
Embodiments of the present invention differ from the prior art systems discussed hereinbefore in several important aspects:
First, animations are invoked at a code level based on the interaction with the user. This results in an asynchronous dialogue between the user and the virtual human (in the examples given below, reference will be made to "virtual patient", but it will be understood that embodiments of the invention may apply to situations other than healthcare applications), which allows the tool to be used without the need for a second human operator. This increases the number of applications available to embodiments of the invention, as it can be used as a virtual patient 'on demand' or even as a virtual patient across a distributed network, such as the Internet. Embodiments of the present invention allow the interchanging of patients for a specific case, for example by changing the sex, ethnicity, age or apparent social background of the virtual patient. This opens up the field of study to evaluate decision making by the user should the patient differ in their gender, ethnicity, age, social background etc.

Secondly, embodiments of the present invention differ in their ability to convert real-world experience and published evidence into a machine-readable format. It is this format that invokes the suitable animation and audio from the virtual patient, based on the user's interaction. The research basis for this design is fundamentally founded on the development of decision analysis technology, which allows this real-world information to be stored in an efficient manner. Although the use of decision analysis on its own is not new, embodiments of the present invention add an additional step by adding to the process the ability to convert real-world information into a machine-readable format, by way of decision analysis techniques. In particular, a decision engine may be provided so as to parse a data file (e.g. in human-readable XML format or other human-readable format) and to convert this into a machine-readable format using decision analysis techniques.
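As a sketch of that parsing step only (the patent does not publish its schema, so the tag and attribute names below are assumptions), a decision engine could convert a human-readable case file into a machine-readable scenario table as follows:

import xml.etree.ElementTree as ET

SAMPLE = """<case patient="Anne">
  <scenario id="1">
    <description>Anne attends for review at the surgery...</description>
    <decision category="c" goto="4">check inhaler technique</decision>
    <decision category="a" goto="2">switch to Easi-breathe devices</decision>
  </scenario>
</case>"""

def parse_case(xml_text: str) -> dict:
    # Human-readable XML in, machine-readable scenario table out.
    table = {}
    for scenario in ET.fromstring(xml_text).iter("scenario"):
        table[int(scenario.get("id"))] = {
            "description": scenario.findtext("description", default=""),
            "decisions": {d.text: int(d.get("goto"))
                          for d in scenario.iter("decision")},
        }
    return table

print(parse_case(SAMPLE)[1]["decisions"])
# -> {'check inhaler technique': 4, 'switch to Easi-breathe devices': 2}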
The primary benefit of including decision analysis in embodiments of the present invention is that it allows the simulation of real-world evidence in a setting that can be conceptualised easily (e.g. a patient sitting in a surgery and exhibiting side effects consistent with the published evidence). It should therefore be noted that embodiments of the present invention differ in a third way, by their ability to simulate the evidence of a large cross-population of experts. In other words, rather than simulating the point of view of a single designer (or team of designers) for a virtual patient case based on a set of personal opinions, the collective point of view, or individual points of view, of a large population of experts (taken from, for example, published literature) may be used as data for decision analysis and the outcomes of certain decisions made by the user. This has the added benefit of providing the user with tailored feedback based on decisions taken, in line with the evidence and experience related to a virtual patient's case.
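One way to picture this evidence-driven behaviour is sketched below; the probability figures and names are invented (echoing the 63% example given earlier), and a real system would derive them from the published literature:

import random

# Pooled published-evidence weights for the outcomes of one decision
# (illustrative numbers only, not taken from any real study).
EVIDENCE = {
    "prescribe drug X": {"harms the patient": 0.63,
                         "no change": 0.27,
                         "improves the condition": 0.10},
}

def simulate_outcome(decision: str, rng: random.Random) -> str:
    # Sample an outcome in proportion to the pooled evidence base, so the
    # virtual patient reflects the collective literature rather than a
    # single designer's opinion.
    outcomes = EVIDENCE[decision]
    return rng.choices(list(outcomes), weights=list(outcomes.values()), k=1)[0]

print(simulate_outcome("prescribe drug X", random.Random()))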
A fundamental advantage of this invention is its ability to identify starkly, through the appearance of the on-screen virtual patient, exactly what the effect on such a patient would have been in real life had the user acted in the way he did during the virtual case study provided by the system. For example, if a case offered the user the option of prescribing various drugs to treat the virtual patient's condition, and the user chose the wrong drug, the system could, almost in real time, display the (possibly fatal) effects of the incorrect prescription. For instance, a set of descriptors (possibly code fragments, mini-applications, or other graphics tools) could be applied to the virtual patient to cause the displayed figure to faint, vomit, turn different shades of colour, become blotchy, sweat, become feverish, collapse, and possibly, ultimately, die. Of course, many other conditions can be defined and described in suitable code, tools, applications or other format compatible with the system. Moreover, by providing descriptors that replicate particular emotions, a more realistic and effective simulation of real life can be obtained. For example, descriptors may be configured to simulate embarrassment, pain, relief, happiness, sadness, anger etc. in response to particular questions or classes of questions raised by, or particular actions taken by, the user when interacting with the system.
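A sketch of how such a descriptor set might be organised is given below; the descriptor names and animation 'channels' are invented for illustration. Note that each descriptor is held separately from any particular patient figure:

# Condition descriptors held independently of any particular avatar, so the
# same "faint" or "vomit" can be applied to any virtual patient figure.
DESCRIPTORS = {
    "faint":   {"posture": "collapsed", "eyes": "closed"},
    "vomit":   {"animation": "vomit", "skin_tone": "pale"},
    "blotchy": {"skin_texture": "blotchy"},
    "fever":   {"skin_tone": "flushed", "animation": "shiver"},
}

def apply_descriptor(avatar_state: dict, name: str) -> dict:
    # Overlay the descriptor's settings on the base figure without altering it.
    return {**avatar_state, **DESCRIPTORS[name]}

anne = {"sex": "female", "age": 67, "posture": "seated"}
print(apply_descriptor(anne, "faint"))

Keeping the descriptors separate from the base figure in this way is also what makes it trivial to swap one patient representation for another, a point returned to below.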
The virtual patient preferably takes the form (to the user) of an animated avatar, preferably rendered so as to have a three dimensional appearance (albeit, with current technology, on a two dimensional display).
It is worth mentioning at this time that the effects of the system on test candidates are so startling that most remember the experience very clearly. When compared to studying comparatively dry and dull textbooks, the system provides a marked improvement. One of the reasons behind this improvement is that the patient condition descriptors (i.e. "the faint", "the vomit", "the collapse", "the death") are independent of the particular virtual patient representation. Accordingly, any virtual reality figure can be incorporated into the system, and it is to this basic figure that the various conditions can be applied. Not only does this make the system very flexible (for instance, it is thus very simple to change the virtual representation from a man to a woman), but also it provides the system as a whole with advanced realism. For example, it could easily be possible to virtually represent someone the user knew in real life, which would further enhance the experience of using the system.
Embodiments of the present invention also allow for the provision of feedback from the virtual patient based on the routes taken through the decision tree. This allows users to receive advice on how their decision path differed from published evidence or peers (for example) via a range of feedback tools. In some embodiments, this feature is implemented in various ways, ranging from a text transcript of the decision path to the virtual patient 'speaking' to the user at the conclusion of the virtual patient consultation and providing a critique of the user's performance. In particular, feedback may include a presentation of data, by the virtual patient, that was not elicited from the system by the user during interaction with the virtual patient. For example, if the user makes an incorrect diagnosis, or advises an incorrect treatment, thereby taking a path along the decision tree that does not result in good treatment for the condition exhibited by the virtual patient, the virtual patient may (after the consultation) present an explanation as to what the correct path through the decision tree should have been, and why.
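The text-transcript form of this feedback could be as simple as the following sketch (node numbers and wording invented):

def feedback_transcript(user_path: list, evidence_path: list, notes: dict) -> str:
    # Compare the route the user took through the decision tree with the
    # evidence-based route, attaching the feedback note recorded at each node.
    lines = []
    for step, node in enumerate(user_path):
        on_track = step < len(evidence_path) and evidence_path[step] == node
        flag = "in line with evidence" if on_track else "departed from evidence"
        lines.append(f"step {step}: scenario {node} ({flag}) {notes.get(node, '')}")
    return "\n".join(lines)

print(feedback_transcript([1, 3, 7], [1, 4, 6], {3: "dose increase not advised"}))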
A student or other user may interact with the system in a variety of ways, depending on the platform chosen for development of a virtual patient case. Web-based cases can use multiple-choice questions or textual analysis of free text inputted by the student or user. In some embodiments of the present invention, commercial speech recognition software may be employed to allow voice interaction with the virtual patient. By providing speech or voice recognition and processing capability (which in itself is known, and which will therefore not be described in full detail), it is possible for a user to direct spoken questions to the system in such a way as to simulate a real-life interaction, with the avatar representation of the virtual human responding in various ways, for example talking and moving, to questions or instructions spoken by the user. With currently available speech recognition systems, some degree of training is required so that the system can process and understand a given user's speech, but it is expected that this will become less important in the relatively near future as speech recognition technology improves. The system may accordingly include a microphone or the like and speech recognition processor for input of voice commands and questions.
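Purely by way of illustration, an off-the-shelf recogniser such as the Python speech_recognition package (one possible choice; the invention does not depend on any particular product) could turn a spoken question into the same free text that a typed web form would supply:

import speech_recognition as sr

def spoken_question() -> str:
    # Capture a question from the microphone and return it as free text,
    # ready for the same textual analysis as typed input.
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        audio = recognizer.listen(source)
    return recognizer.recognize_google(audio)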
Although the general disclosure of preferred embodiments of the present invention is directed primarily to healthcare applications, other embodiments could equally be applied to any situation in education where there is an evidence base or experience documented regarding the interaction with humans. Examples could include a simulator to improve communication skills with virtual customers, a simulator for helpdesk staff to advise virtual customers of a specific course of action (e.g. help-desk staff training) or even a simulator where the student takes on various other roles, such as a pharmacist speaking to a virtual doctor, a tutor speaking with a virtual student or an employee speaking to a virtual manager during an appraisal.
Throughout the description and claims of this specification, the words "comprise" and "contain" and variations of the words, for example "comprising" and "comprises", mean "including but not limited to", and are not intended to (and do not) exclude other moieties, additives, components, integers or steps. Throughout the description and claims of this specification, the singular encompasses the plural unless the context otherwise requires. In particular, where the indefinite article is used, the specification is to be understood as contemplating plurality as well as singularity, unless the context requires otherwise.
Features, integers, characteristics, compounds, chemical moieties or groups described in conjunction with a particular aspect, embodiment or example of the invention are to be understood to be applicable to any other aspect, embodiment or example described herein unless incompatible therewith.
BRIEF DESCRIPTION OF THE DRAWINGS
For a better understanding of the present invention and to show how it may be carried into effect, reference shall now be made by way of example to the accompanying drawings, in which:
FIGURE 1 provides a diagrammatic representation of the system as a whole, and
FIGURE 2 shows a possible decision tree structure suitable for a case involving a patient who is an asthma sufferer.
DETAILED DESCRIPTION
As a first part of this description, the method by which cases are designed for the system is described.
The first step in designing a case is Patient Selection. A new case designed for use with the system according to the invention needs to focus on a single patient. Different patients can be used for individual cases, and information on other people (e.g. family members) can be provided in the branch element/terminus descriptors if relevant.
During this phase of development, it is necessary to describe the patient's profile. The system requires information about the patient such as their description (gender, age, height, weight etc.), previous medical history and any social history. It is perceived by the applicant herefor that after a number of cases have been designed, the system may be extended to develop a 'patient population' - a small set of patients that can be perceived as members of a virtual community. Such a resource would allow case designers to select a patient from the patient population or examine the effect of their decision across the entire virtual patient population.
A particular case is created by designing a number of scenarios that are linked by the decisions that can be taken. This relates to the decision tree aspect of the invention, which may most usefully be mapped out in a flowchart or organisational chart, such as that shown in Figure 2. Each of the boxes can be thought of as branch elements (i.e. elements from which a branch extends or is possible) or termini (i.e. elements from which no further branch is possible). In each box, there is provided some text indicative of the patient's physical state at that stage in the diagnosis procedure. Also provided in each branch element are a series of options or other means by which a user can enter information or make a selection. This user input is then analysed by the system to allow it to determine, according to the decision tree, which branch element to display next.
As can be seen in the Figure, a case will often have more than one final scenario, depending on the various options which are offered to and chosen by the user.
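Continuing the earlier sketches, and assuming the scenario-table shape produced by the hypothetical parse_case function above, the display-and-branch loop for a case could be reduced to the following (illustrative only):

def run_case(table: dict, start: int = 1) -> list:
    # Walk the decision tree: show each branch element's text and options,
    # read the user's selection, and branch until a terminus is reached.
    path, node = [start], start
    while table[node]["decisions"]:              # a terminus offers no decisions
        print(table[node]["description"])
        for label in table[node]["decisions"]:
            print("  -", label)
        node = table[node]["decisions"][input("decision> ")]
        path.append(node)
    print(table[node]["description"])            # the final (outcome) scenario
    return path                                  # recorded for later feedback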
A case is made up of many scenarios which need to be described individually to support the decisions that can be taken. To begin writing a case, it is necessary to consider the following pieces of information for each scenario:
Audience
The type of student for whom the case is designed (e.g. Pharmacy, Medical, Nursing students). As a case has many scenarios, the audience does not always have to be the same for each scenario. For example, by changing the audience it is possible to design a case that allows a group of students from various health disciplines to work together on a single case.
Description
Each scenario must be fully described as it will appear to the student (e.g. Anne walks into the pharmacy complaining of...).
Additional Information
The system can easily be adapted to provide additional information in the form of attachments to the user, so Word documents, http links, specific pictures and the like can be included, and the system can refer the student to these to support their decision in each scenario.
Decisions
Unless an outcome scenario is being described for a case, it is necessary to provide two or more decisions for each scenario described, together with branch information, i.e. where each decision should lead. A simple numbering scheme for the scenarios would allow one scenario to reference another. In this manner, it is of course possible to reuse scenarios, so that a number of decisions can result in the display of the same scenario.
In the system, it may also be necessary to categorise each decision into three types. If the system is used for a healthcare application, for example, all decisions may typically be broadly categorised as (a) treating the patient, (b) referring the patient to another healthcare professional or (c) giving advice to the patient.
Multimedia
With each scenario, it is possible (although not mandatory) to request visualisation of one or more key points of the scenario through the virtual patient technologies incorporated into the system. This feature may invoke an animation based on the user's interaction with the system, and can therefore request a response from the virtual patient based on what has been said.
An example of a case description, which can be parsed by a Decision Engine so as to convert it into a machine-readable format, is provided below.
Patient Description
Retired Teacher, Caucasian, Weight = 88kg
Married, husband a heavy smoker
Two Children (Luke and Jessica), now 31 and 29
Anne has suffered with asthma since childhood, suffering 3 exacerbations in the past 12 months. She had a total hysterectomy at age 48 (menorrhagia & prolapse). FH of CVD: her mother died following a stroke at age 76; prior to this she had a succession of TIAs and had moved in with Anne and her husband. Her husband worked in management for the local coal board and was retired on grounds of ill health (arthritis) in 1996 (age 62).
Anne buys analgesics regularly from the local pharmacy for her husband (Co-Codamol 8/500) as he doesn't like to bother the GP for such 'minor' medications. Anne doesn't have any help at home; she does her own cooking and cleaning, and when her asthma is OK her mobility and exercise tolerance are good. She was advised to increase her activity levels a few years ago and has started to walk the dog more, since her husband is becoming increasingly unable to walk long distances without significant pain.
Scenario Number: 1
Audience: Pharmacy Students & Medical Students Only
Description
Anne has been asthmatic for many years. She has attended her review annually at the surgery and her prescription has stayed pretty constant for some time. There have been a number of acute exacerbations of her asthma in the past 12 months and she attends for review at the surgery following an Accident and Emergency admission 5 days previously...
Additional Information
[include word documents, links etc. to additional resources for student consideration]
Decisions
Category: (a) treating the patient
increase Beclometasone to 250mcg, 2 puffs bd MDI (goes to scenario 3)

Category: (c) giving advice to the patient
check inhaler technique (goes to scenario 4)

Category: (a) treating the patient
switch to Easi-breathe devices (goes to outcome scenario 2)

Multimedia
Invokes an animation of the patient attending her annual review and taking a peak flow measurement.
Feedback
[This section would be used to record the feedback that the user should be given, by virtue of navigating to this node in the decision tree. For example, if the node detailed the prescription of an incorrect medication, the feedback might contain information on recommended alternative drugs, as well as a note that this action was not recommended. Such feedback would be recorded and reported back to the user at the end of the consultation.]
[the description of the remaining 9 scenarios in this case is omitted here in the interest of brevity, but the format is generally similar to the above].
In order to present the pre-designed cases in an informative, useful and striking manner, the system is designed as follows.
Referring to Figure 1, the system 2 consists of three main components to deliver the core functionality. These are referred to as:
(1) The Patient Case File 4 - this is an XML-based file that drives the content in each case. The file format can be generated with supporting applications to allow case designers with little or no technical knowledge to create new cases for use with the system.
This file uses an XML definition to allow the decision engine to parse the file and process its contents. The files employ a decision tree to traverse the various scenarios a patient case may have, depending on the decisions taken within a case.
(2) The Decision Engine 6 - this is responsible for parsing the Patient Case File and rendering the content into a machine-readable format. The decision engine 6 is also responsible for calling any external resources 8 that may be needed to render the case (e.g. documents, images, animation/sound files), and then returns the case to the user via a standard output format (e.g. a web page). In accordance with embodiments of the invention, the external resources 8 also include the descriptors which can be applied to the virtual patient, the computer-readable representation of which is similarly retained in the database.
The engine also tracks the decisions taken by a user in each case and then passes this data on to a database 10 for recording. This information is then used when a user wishes to examine a transcript of the decisions they made for a specific case.
(3) The Database 10 - the database is responsible for tracking the decisions taken within each case (and ultimately for delivering feedback to the user, where the feedback functionality is provided) and for keeping a record of the location of external resources that may be required to render a case (e.g. animation files).
The database is also referred to when a user wishes to recall their decisions within a case. This information is also used at a higher level, so that case designers can examine what types of decision are being made in their case and whether additional supporting information needs to be supplied to the user to improve the decision-making process.
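Purely by way of illustration, decision tracking of this kind might be recorded as in the following minimal sketch, which uses SQLite from Python; the table and column names, and the sample values, are assumptions rather than the schema actually employed by the system.

import sqlite3

# Minimal sketch of decision tracking; table, column names and sample
# values are illustrative assumptions only.
conn = sqlite3.connect("decisions.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS decision_log (
        user_id     TEXT,
        case_id     TEXT,
        scenario_id INTEGER,
        category    TEXT,     -- 'a' treat, 'b' refer, 'c' advise
        decision    TEXT,
        taken_at    TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    )
""")

# Record one decision as the user traverses the decision tree.
conn.execute(
    "INSERT INTO decision_log (user_id, case_id, scenario_id, category, decision) "
    "VALUES (?, ?, ?, ?, ?)",
    ("student42", "anne_phillips", 1, "c", "check inhaler technique"),
)
conn.commit()

# A transcript of the user's decisions can then be recalled for review.
for row in conn.execute(
    "SELECT scenario_id, category, decision FROM decision_log "
    "WHERE user_id = ? AND case_id = ? ORDER BY taken_at",
    ("student42", "anne_phillips"),
):
    print(row)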
At a technical level, to allow the decision engine to parse the XML file so that the system can provide this functionality, information is declared in the XML file as a series of special XML tags.
At the start of the file, a tag is declared identifying the patient to whom the case applies: <patient id="01">Anne Phillips...
Each scenario is then declared via a series of scenario tags that describe what is happening to the patient at this stage of the case. Typically, one would expect a series of scenario tags making up the various scenarios of each case.
<scenario01>Anne complains of breathlessness; what do you do?</scenario01>
Within each scenario, additional information can be provided to the user (via hyperlinks) before they make their decision. This is declared in the file as follows:

<scenario01link url="CMP.doc">CMP</scenario01link>
<scenario01link url="http://www.sign.ac.uk">SIGN/BTS Guidelines</scenario01link>
<scenario01link url="http://www.nice.org.uk">NICE</scenario01link>
Decisions are then declared, being those decisions applicable to the particular scenario. Each of these decisions is categorised via the "Type" attribute and is recorded back to the database accordingly.
<scenario01option type="a">increase Beclometasone to 250mcg... </scenario01option>
<scenario01option type="c">check inhaler technique </scenario01option>
<scenario01option type="a">switch to Easi-breathe devices </scenario01option>
As the next part of the file, the decisions are mapped to paths within the decision tree to allow the case to traverse the tree correctly. Each scenario is represented by an identification (ID) value, by which it is referenced in the XML file thus:
<scenario01path>02</scenario01path>
<scenario01path>03</scenario01path>
<scenario01path>04</scenario01path>
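To illustrate how a decision engine might consume the tags described so far, a minimal parsing sketch in Python follows. The wrapping <case> root element, the positional pairing of options with paths, and the sample content are all assumptions made for the purposes of the sketch.

import xml.etree.ElementTree as ET

# Minimal sketch of parsing the scenario tags described above; the
# <case> root element is assumed for well-formedness, and options are
# assumed to pair positionally with the paths they branch to.
SAMPLE = """
<case>
  <patient id="01">Anne Phillips</patient>
  <scenario01>Anne complains of breathlessness; what do you do?</scenario01>
  <scenario01option type="a">increase Beclometasone to 250mcg</scenario01option>
  <scenario01option type="c">check inhaler technique</scenario01option>
  <scenario01option type="a">switch to Easi-breathe devices</scenario01option>
  <scenario01path>02</scenario01path>
  <scenario01path>03</scenario01path>
  <scenario01path>04</scenario01path>
</case>
"""

root = ET.fromstring(SAMPLE)
description = root.findtext("scenario01")
options = [(o.get("type"), o.text) for o in root.findall("scenario01option")]
paths = [p.text for p in root.findall("scenario01path")]

# Each option is paired with the scenario it leads to.
for (category, text), target in zip(options, paths):
    print(f"[{category}] {text} -> scenario {target}")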
A tag is also included in the XML file which calls an external multimedia resource, and in particular an emotional or physical descriptor file which can be applied to a default virtual human (e.g. avatar) in memory, in accordance with embodiments of the invention. This may be an image file, a sound file or an animation to cause the avatar to respond in a predefined way. This may involve using a file from an external media supplier, and can be declared in the XML file as follows:

<scenario01resource file="patient01/emotions/pain.flv">02</scenario01resource>

Such animation files need to be designed before the XML file can reference them. However the animations are designed, they can be invoked at a code level and applied to different patients. It is therefore possible for the invention to call on a database of animations (using a combination of external and in-house developed multimedia resources) to invoke an emotion in the patient across a number of cases.
Thus, common actions (e.g. smiling, angry, sad) could be designed for all patients in one process, allowing for an extensive population of animations which the XML file can reference via this tag.
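As an illustration of how a decision engine might resolve such a resource reference, the following sketch follows the "patient01/emotions/pain.flv" convention of the example above; the media root directory and the shared fallback library are assumptions, introduced to show how animations designed once could serve many patients.

from pathlib import Path

# Sketch of resolving an emotion animation for a given patient. The
# media root and the shared fallback library are illustrative
# assumptions, not part of the system as described.
MEDIA_ROOT = Path("media")
SHARED_LIBRARY = MEDIA_ROOT / "shared" / "emotions"

def resolve_animation(patient_id: str, emotion: str) -> Path:
    # Prefer an animation designed specifically for this patient.
    patient_file = MEDIA_ROOT / patient_id / "emotions" / f"{emotion}.flv"
    if patient_file.exists():
        return patient_file
    # Fall back to the population of animations designed for all patients.
    return SHARED_LIBRARY / f"{emotion}.flv"

print(resolve_animation("patient01", "pain"))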
It is important to note at this point that supporting software applications can be used which guide a designer through writing a case. This software will automatically generate the XML required in a Patient Case File without the user being exposed to the raw XML file format. This allows a case designer to create his/her own case without requiring knowledge of specialist programming languages.
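By way of example, the kind of generation such an authoring tool might perform is sketched below; the element naming simply follows the examples given earlier, while the function and its parameters are assumptions for illustration only.

import xml.etree.ElementTree as ET
from typing import List, Tuple

# Sketch of how an authoring tool might emit the scenario tags shown
# above from a designer's form input, without exposing raw XML.
def emit_scenario(case: ET.Element, number: int, description: str,
                  decisions: List[Tuple[str, str]], paths: List[str]) -> None:
    tag = f"scenario{number:02d}"
    ET.SubElement(case, tag).text = description
    for category, text in decisions:
        option = ET.SubElement(case, f"{tag}option", type=category)
        option.text = text
    for target in paths:
        ET.SubElement(case, f"{tag}path").text = target

root = ET.Element("case")
ET.SubElement(root, "patient", id="01").text = "Anne Phillips"
emit_scenario(root, 1, "Anne complains of breathlessness; what do you do?",
              [("a", "increase Beclometasone to 250mcg"),
               ("c", "check inhaler technique"),
               ("a", "switch to Easi-breathe devices")],
              ["02", "03", "04"])
print(ET.tostring(root, encoding="unicode"))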
In summary therefore, a virtual human interaction system is described for use on a PC or web-enabled computer, which facilitates the training and education of users. Its initial application is directed to healthcare practitioners such as doctors, nurses, pharmacists and the like by allowing them to virtually interact with a virtual patient delivered by the system and displayed on a computer screen, although other applications outside the healthcare field may be envisaged. The system embodies a plurality of cases, and for each case there are a number of possible outcomes, depending on the choices made by the healthcare services practitioner at each stage in a particular case. Such choices are made in the form of user input to the system through the computer interface, and each case consists of a decision tree element consisting of branch elements from which the decision tree may branch in one or more directions toward further branch elements or tree termini, the user input causing the system to move through the decision tree of the case. Each branch element and terminus includes descriptors of a particular condition of the virtual human at that time in the specific case, and these are displayed to the user at each specific stage in the case to provide the user with a current indication of the well-being of the virtual patient. Together with the virtual patient displayed by the system, also incorporated into the system are a plurality of appearance descriptors which can be applied to the virtual patient by the system so as to cause a change in the appearance thereof, said descriptors being based on real-life human conditions which affect the physical appearance of humans generally and which are thus mimicked in the virtual patient. In accordance with the invention, the system causes the appearance of the virtual patient to change by applying one or more of said appearance descriptors to said virtual patient as the system moves through the decision tree in response to user input. The resulting effect is to provide users with an almost real-time indication of the effects of their actions on patients.

Claims

CLAIMS:
1. A virtual human interaction system for use on a user terminal, said system being adapted for a plurality of cases in which a number of possible outcomes can be achieved depending on the user input to the system at various stages through its delivery, each case consisting of at least a decision tree element consisting of branch elements from which the decision tree may branch in one or more directions toward further branch elements or tree termini, each branch element and terminus including descriptors of a particular condition of the virtual human at that time in the specific case, said system comprising at least
i) a virtual human representation element capable of being displayed graphically to appear as a virtual human on screen; and
ii) a plurality of appearance descriptor elements capable of interacting with the virtual human representation element so as to cause a change in the appearance thereof, said appearance descriptor elements being based on real- life human conditions and/or published evidence which affect the physical appearance of humans generally and which are thus mimicked in the virtual human;
wherein the system causes the appearance of the virtual human to change by applying one or more of said appearance descriptor elements to said virtual human representation element when the system is caused to be at one of the branch elements or termini as a result of user input at said terminal.
2. A system as claimed in claim 1, wherein the system causes the appearance of the virtual human to change simultaneously with the change in position of the system within the decision tree, thus giving a realistic indication to the user of the effects of their inputs to the system.
3. A system as claimed in any preceding claim, wherein the cases to which the system is adapted are virtual ones in which a healthcare services provider interacts with a virtual patient displayed on-screen.
4. A system as claimed in claim 3, wherein:
the virtual patient suffers from a predetermined ailment or condition initially unknown to the healthcare services provider,
the branch element or termini descriptors, which may be complete or incomplete as far as the condition of the virtual patient is concerned, are provided successively by the system to the healthcare services practitioner as information which should be indicative of, or at least suggestive of, the particular ailment or condition, and
the system provides one or more options to the healthcare services practitioner from which a selection of one or more options is made, said selection being returned to the system which thus causes the system to move to the next branch element or terminus in the decision tree, display the next descriptor associated therewith, and optionally to cause the appearance of the virtual patient to change if so demanded by the case and the previous user input.
5. A system as claimed in any preceding claim, further comprising means for providing feedback to a user based on pathways taken through the decision tree.
6. A system as claimed in claim 5, wherein the system is configured to present the feedback in audio and/or video format.
7. A system as claimed in claim 6, wherein the system is configured such that the feedback is provided in combination with an animation of the virtual human.
8. A system as claimed in any preceding claim, further comprising speech input and recognition means to enable user instructions to be input in spoken form.
9. A system as claimed in any preceding claim, wherein the descriptor elements include descriptors representative of human emotions such as happy, sad, angry, in pain.
10. A system as claimed in any preceding claim, further comprising a database of character templates configured to define at least one of an age, a sex, an ethnicity and a social background of the virtual human, and wherein the templates are configured to be applied so as to define an appearance of the virtual human independently of the appearance descriptor elements.
11. A method of data processing in which there is provided a virtual human interaction system for use on a user terminal, said system being adapted for a plurality of cases in which a number of possible outcomes can be achieved depending on the user input to the system at various stages through its delivery, each case consisting of at least a decision tree element consisting of branch elements from which the decision tree may branch in one or more directions toward further branch elements or tree termini, each branch element and terminus including descriptors of a particular condition of the virtual human at that time in the specific case, said system comprising at least
i) a virtual human representation element capable of being displayed graphically to appear as a virtual human on screen; and
ii) a plurality of appearance descriptor elements capable of interacting with the virtual human representation element so as to cause a change in the appearance thereof, said appearance descriptor elements being based on real- life human conditions and/or published evidence which affect the physical appearance of humans generally and which are thus mimicked in the virtual human;
wherein the appearance of the virtual human is changed by applying one or more of said appearance descriptor elements to said virtual human representation element when the system is caused to be at one of the branch elements or termini as a result of user input at said terminal.
12. A method according to claim 11, wherein the appearance of the virtual human is changed simultaneously with the change in position of the system within the decision tree, thus giving a realistic indication to the user of the effects of their inputs to the system.
13. A method according to claim 11 or 12, wherein the cases to which the method is applied are virtual ones in which a healthcare services provider interacts with a virtual patient displayed on-screen.
14. A method according to claim 13, wherein:
the virtual patient suffers from a predetermined ailment or condition initially unknown to the healthcare services provider,
the branch element or termini descriptors, which may be complete or incomplete as far as the condition of the virtual patient is concerned, are provided successively to the healthcare services practitioner as information which should be indicative of, or at least suggestive of, the particular ailment or condition, and
one or more options are presented to the healthcare services practitioner from which a selection of one or more options is made, said selection being returned to the system which thus causes the system to move to the next branch element or terminus in the decision tree, display the next descriptor associated therewith, and optionally to cause the appearance of the virtual patient to change if so demanded by the case and the previous user input.
15. A method according to any one of claims 11 to 14, wherein feedback is provided to a user based on pathways taken through the decision tree.
16. A method according to claim 15, wherein the feedback is presented in audio and/or video format.
17. A method according to claim 16, wherein the feedback is provided in combination with an animation of the virtual human.
18. A method according to any one of claims 11 to 17, wherein speech input and recognition means are provided to enable user instructions to be input in spoken form.
19. A method according to any one of claims 11 to 18, wherein the descriptor elements include descriptors representative of human emotions such as happy, sad, angry, in pain.
20. A method according to any one of claims 11 to 19, wherein there is provided a database of character templates configured to define at least one of an age, a sex, an ethnicity and a social background of the virtual human, and wherein the templates are applied so as to define a basic appearance of the virtual human independently of the appearance descriptor elements.
PCT/GB2007/050719 2007-11-27 2007-11-27 Virtual human interaction system WO2009068838A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
AU2007361697A AU2007361697B2 (en) 2007-11-27 2007-11-27 Virtual human interaction system
PCT/GB2007/050719 WO2009068838A1 (en) 2007-11-27 2007-11-27 Virtual human interaction system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/GB2007/050719 WO2009068838A1 (en) 2007-11-27 2007-11-27 Virtual human interaction system

Publications (1)

Publication Number Publication Date
WO2009068838A1 (en)

Family

ID=39267904

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2007/050719 WO2009068838A1 (en) 2007-11-27 2007-11-27 Virtual human interaction system

Country Status (2)

Country Link
AU (1) AU2007361697B2 (en)
WO (1) WO2009068838A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115762688A (en) * 2022-06-13 2023-03-07 人民卫生电子音像出版社有限公司 Super-simulation virtual standardized patient construction system and diagnosis method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6692258B1 (en) * 2000-06-26 2004-02-17 Medical Learning Company, Inc. Patient simulator
US20040064298A1 (en) * 2002-09-26 2004-04-01 Robert Levine Medical instruction using a virtual patient
US20040121295A1 (en) * 2002-12-20 2004-06-24 Steven Stuart Method, system, and program for using a virtual environment to provide information on using a product
WO2005055011A2 (en) * 2003-11-29 2005-06-16 American Board Of Family Medicine, Inc. Computer architecture and process of user evaluation
US6972775B1 (en) * 1999-11-01 2005-12-06 Medical Learning Company, Inc. Morphing patient features using an offset

Also Published As

Publication number Publication date
AU2007361697A1 (en) 2009-06-04
AU2007361697B2 (en) 2013-03-21

Legal Events

Date Code Title Description

121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 07824929; Country of ref document: EP; Kind code of ref document: A1)

NENP Non-entry into the national phase (Ref country code: DE)

WWE Wipo information: entry into national phase (Ref document number: 2007361697; Country of ref document: AU)

ENP Entry into the national phase (Ref document number: 2007361697; Country of ref document: AU; Date of ref document: 20071127; Kind code of ref document: A)

122 Ep: pct application non-entry in european phase (Ref document number: 07824929; Country of ref document: EP; Kind code of ref document: A1)