WO2009068838A1 - Virtual human interaction system - Google Patents
Virtual human interaction system
- Publication number
- WO2009068838A1 (PCT/GB2007/050719)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- virtual
- appearance
- virtual human
- patient
- human
- Prior art date
Links
Classifications
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H10/00—ICT specially adapted for the handling or processing of patient-related medical or healthcare data
- G16H10/60—ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g. for electronic patient records
Definitions
- This invention relates to a virtual human interaction system, and more particularly to a virtual human interaction system capable of being provided over local or disparate computer networks to users at one or more terminals and whereat users are presented with a situation involving one or more humans, which are virtually represented onscreen at said terminal, and with which said users must interact by providing one or more inputs at the terminal.
- this invention relates to a virtual patient system ideally intended for trainee medical practitioners to help them to learn or enhance their diagnostic skills based on a simulated doctor/patient scenario which is virtually represented on screen.
- whilst the following description relates almost exclusively to the use of the invention in the medical industry for the purpose specified, the reader will instantly become aware that the invention has a potentially much wider application in the training and education fields generally, and therefore the invention should be considered as encompassing such applications.
- Virtual education and/or training systems which involve some type of background computer program coupled with images and/or video files (e.g. mpeg, avi and the like) for display on-screen are well established.
- such systems can be provided both locally, in terms of being provided and loaded on an individual, stand-alone, non- networked PC, and in distributed fashion, whereby the system is stored centrally and delivered either physically in terms of being downloadable to suitably networked PCs, or virtually in terms of the program being executable at the server side and the results of the execution (which is to some extent controlled by the user input at the networked PC) are then transmitted by HTML or other suitable format so that the display on the user's PC can be caused to change as program execution continues.
- doctor and patient may be represented by actors, and in the case of systems where video footage is provided to users, such actors would be previously instructed how to behave during filming according to the particular notional plight of the patient, e.g. the actor playing the patient is told to limp as a result of having a notional sprained ankle.
- This system is typical of many available on the web, in that a student is presented with a patient case to read, optionally provided with some patient medical history or medical records, and is then presented with a number of related options.
- Such systems are fundamentally limited in that they can relate only to one possible situation. For example, the user will be presented with a case to which the photos or video footage used are exclusively appropriate. Additionally, the text used in describing the case is most likely to be hard-coded into the website being provided, with the result that a total re-design is required if such systems are to be useful in training users in other situations.
- VHI Interface
- the system features realtime photo-realistic digital replicas of multiple individuals capable of talking, acting and showing emotions and over 60 different facial expressions. These "virtual patients" appear in a high-performance virtual reality environment featuring full panoramic backgrounds, animated 3D objects, behaviour and A.I. models, a complete vision system for supporting interaction and advanced animation interfaces.
- the VHI takes advantage of the latest advances in computer graphics. As such, it allows medical researchers and practitioners to create real-time responsive virtual humans for their experiments using computer systems priced under $2000.
- the virtual patients can talk, act and express a wide range of facial expressions, emotions, and body gestures. Their motions and actions can be imported from MPEG4 or motion capture files or animated.
- An additional scripting layer allows researchers to use their own scripting controls implemented in XML, HTML, LUA, TCL/TK or TCP/IP.
- This system developed by the University of Huddersfield, UK, creates a "virtual hospital" in HTML and other code, and is a computer-based learning tool for health care professionals that simulates the care context for patients within the environmental context of a General Hospital.
- the system has been built from components that reflect typical aspects of a care environment e.g. patients, patient assessment forms, observations records etc.
- the components provide the facilitator with the means to support complex and imaginative patient based scenarios to satisfy key learning outcomes.
- the system is innovative in design and delivery of course materials as it encourages students to engage with nursing matter through vignettes and patient based scenarios within the context of application to patient care; and allows students to explore wider issues relating to management of the care environment through duty roster and resource management and exploration of evidence based practice.
- a virtual human interaction system for use on a user terminal, said system being adapted for a plurality of cases to which a number of possible outcomes can be achieved depending on the user input to the system at various stages through its delivery, each case consisting of at least a decision tree element consisting of branch elements from which the decision tree may branch in one or more directions toward further branch elements or tree termini, each branch element and terminus including descriptors of a particular condition of the virtual human at that time in the specific case, said system comprising at least
- a virtual human representation element capable of being displayed graphically to appear as a virtual human on screen
- a plurality of appearance descriptor elements capable of interacting with the virtual human representation element so as to cause a change in the appearance thereof, said appearance descriptor elements being based on real-life human conditions and/or published evidence which affect the physical appearance of humans generally and which are thus mimicked in the virtual human;
- the system causes the appearance of the virtual human to change by applying one or more of said appearance descriptor elements to said virtual human representation element when the system is caused to be at one of the branch elements or termini as a result of user input at said terminal.
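The decision-tree arrangement claimed above can be sketched in code. The following is a minimal illustration only, not the patented implementation; all class and function names (`BranchElement`, `AppearanceDescriptor`, `advance`) are hypothetical.

```python
# Minimal sketch of the claimed decision-tree structure; all names here
# are illustrative, not taken from the patent itself.
from dataclasses import dataclass, field


@dataclass
class AppearanceDescriptor:
    """One change to the virtual human's appearance (e.g. pallor, sweating)."""
    name: str

    def apply(self, avatar_state: dict) -> None:
        # Placeholder for a real rendering/animation call.
        avatar_state[self.name] = True


@dataclass
class BranchElement:
    """A node in a case's decision tree; a node with no children is a terminus."""
    condition_text: str                           # patient's condition at this point
    descriptors: list = field(default_factory=list)
    children: dict = field(default_factory=dict)  # user choice -> next node

    @property
    def is_terminus(self) -> bool:
        return not self.children


def advance(node: BranchElement, choice: str, avatar_state: dict) -> BranchElement:
    """Move through the tree on user input, applying appearance descriptors."""
    nxt = node.children[choice]
    for d in nxt.descriptors:
        d.apply(avatar_state)
    return nxt
```

A two-step case might then be built as `root = BranchElement("Patient presents with wheeze", children={"give_inhaler": terminus})`, with each call to `advance` moving the case toward a terminus and updating the on-screen avatar.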
- a method of data processing in which there is provided a virtual human interaction system for use on a user terminal, said system being adapted for a plurality of cases to which a number of possible outcomes can be achieved depending on the user input to the system at various stages through its delivery, each case consisting of at least a decision tree element consisting of branch elements from which the decision tree may branch in one or more directions toward further branch elements or tree termini, each branch element and terminus including descriptors of a particular condition of the virtual human at that time in the specific case, said system comprising at least
- a virtual human representation element capable of being displayed graphically to appear as a virtual human on screen
- a plurality of appearance descriptor elements capable of interacting with the virtual human representation element so as to cause a change in the appearance thereof, said appearance descriptor elements being based on real-life human conditions and/or published evidence which affect the physical appearance of humans generally and which are thus mimicked in the virtual human;
- the appearance of the virtual human is changed by applying one or more of said appearance descriptor elements to said virtual human representation element when the system is caused to be at one of the branch elements or termini as a result of user input at said terminal.
- Animations are invoked at a code-level based on the interaction with the user.
- This results in an asynchronous dialogue between the user and the virtual human (in the examples given below, reference will be made to a "virtual patient", but it will be understood that embodiments of the invention may apply to situations other than healthcare applications), which allows the tool to be used without the need for a second human operator.
- Embodiments of the present invention allow the interchanging of patients for a specific case, for example by changing the sex, ethnicity, age or apparent social background of the virtual patient.
- embodiments of the present invention differ in their ability to convert real-world experience and published evidence into a machine-readable format. It is this format that invokes the suitable animation and audio from the virtual patient, based on the user's interaction.
- the research basis for this design is fundamentally founded on the development of decision analysis technology, which allows this real-world information to be stored in an efficient manner.
- embodiments of the present invention add an additional step by adding to the process the ability to convert real-world information into a machine readable format, by way of decision analysis techniques.
- a decision engine may be provided so as to parse a data file (e.g. in human-readable XML format or other human-readable format) and to convert this into a machine-readable format using decision analysis techniques.
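A decision engine of this kind might parse the human-readable case file roughly as follows. The XML element and attribute names used here (`<scenario>`, `id`, `next`, `descriptor`) are assumptions for illustration; the patent does not publish its schema.

```python
# Sketch of a decision engine parsing a human-readable XML case file into an
# in-memory structure. Element/attribute names are assumed, not from the patent.
import xml.etree.ElementTree as ET

CASE_XML = """
<case audience="Pharmacy">
  <scenario id="s1" text="Anne presents with worsening wheeze">
    <decision label="Prescribe salbutamol" next="s2"/>
    <decision label="Refer to GP" next="s3"/>
  </scenario>
  <scenario id="s2" text="Symptoms ease" descriptor="relieved"/>
  <scenario id="s3" text="Awaiting GP appointment"/>
</case>
"""


def parse_case(xml_text: str) -> dict:
    """Return {scenario_id: {text, descriptor, decisions: {label: next_id}}}."""
    root = ET.fromstring(xml_text)
    scenarios = {}
    for s in root.findall("scenario"):
        scenarios[s.get("id")] = {
            "text": s.get("text"),
            "descriptor": s.get("descriptor"),
            "decisions": {d.get("label"): d.get("next") for d in s.findall("decision")},
        }
    return scenarios
```

The resulting dictionary is one plausible "machine-readable format": each scenario's decisions map directly onto edges of the decision tree, so traversal reduces to dictionary lookups.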
- embodiments of the present invention differ in a third way, by their ability to simulate the evidence for a large cross-population of experts.
- the collective point of view, or individual points of view, of a large population of experts may be used as data for decision analysis and outcomes of certain decisions made by the user. This has the added benefit of providing the user with tailored feedback based on decisions taken, in line with the evidence and experience related to a virtual patient's case.
- a fundamental advantage of this invention is its ability to identify starkly, through the appearance of the on-screen virtual patient, exactly what the effect on such a patient would have been in real life had the user acted in the way he did during the virtual case study provided by the system. For example, if a case offered the user the option of prescribing various drugs to treat the virtual patient's condition, and the user chose the wrong drug, the system could, almost in real time, display the (possibly fatal) effects of the incorrect prescription. For instance, a set of descriptors (possibly code fragments, mini-applications, or other graphics tools) could be applied to the virtual patient to cause the displayed figure to faint, vomit, turn different shades of colour, become blotchy, sweat, become feverish, collapse, and possibly, ultimately, die.
- descriptors may be configured to simulate embarrassment, pain, relief, happiness, sadness, anger etc. in response to particular questions or classes of questions raised by, or particular actions taken by, the user when interacting with the system.
- the virtual patient preferably takes the form (to the user) of an animated avatar, preferably rendered so as to have a three dimensional appearance (albeit, with current technology, on a two dimensional display).
- Embodiments of the present invention also allow for the provision of feedback from the virtual patient based on the routes taken through the decision tree. This allows users to receive advice on how their decision path differed from published evidence or peers (for example) via a range of feedback tools.
- this feature is implemented in various ways, ranging from a text transcript of the decision path to the virtual patient 'speaking' to the user at the conclusion of the virtual patient consultation and providing a critique of the user's performance.
- feedback may include a presentation of data, by the virtual patient, that was not elicited from the system by the user during interaction with the virtual patient.
- the virtual patient may (after the consultation) present an explanation as to what the correct path through the decision tree should have been, and why.
- a student or other user may interact with the system in a variety of ways, depending on the platform chosen for development of a virtual patient case.
- Web-based cases can use multiple-choice questions or textual analysis of free text inputted by the student or user.
- commercial speech recognition software may be employed to allow voice interaction with the virtual patient.
- speech or voice recognition and processing capability which in itself is known, and which will therefore not be described in full detail
- the system may accordingly include a microphone or the like and speech recognition processor for input of voice commands and questions.
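The textual analysis of free text mentioned above could, in its simplest form, be a keyword match between the user's typed question and the case's predefined decisions. The sketch below is illustrative only; a deployed system would use something far more robust, and the function and parameter names are hypothetical.

```python
# Illustrative-only free-text matcher: maps a typed question onto one of the
# case's predefined decision labels by counting keyword overlaps.
def match_decision(free_text: str, decisions: dict):
    """Return the label of the decision whose keywords best match the input,
    or None if nothing matches. `decisions` maps label -> set of keywords."""
    words = set(free_text.lower().split())
    best, best_score = None, 0
    for label, keywords in decisions.items():
        score = len(words & set(keywords))
        if score > best_score:
            best, best_score = label, score
    return best
```

For instance, with `decisions = {"treat": {"prescribe", "inhaler"}, "refer": {"refer", "gp"}}`, the input "I would prescribe an inhaler" would match the "treat" branch. The same interface could sit behind a speech-recognition front end, with the recogniser's transcript passed in as `free_text`.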
- FIGURE 1 provides a diagrammatic representation of the system as a whole
- FIGURE 2 shows a possible decision tree structure suitable for a case involving a patient who is an asthma sufferer.
- the first step in designing a case is Patient Selection.
- when designing a new case for use with the system according to the invention, it needs to focus on a single patient. Different patients can be used for individual cases, and information on other people (e.g. family members) can be provided in the branch element/terminus descriptors if relevant.
- the system requires information about the patient such as their description (gender, age, height, weight etc.), previous medical history and any social history. It is perceived by the applicant herefor that after a number of cases have been designed, the system may be extended to develop a 'patient population' - a small set of patients that can be perceived as members of a virtual community. Such a resource would allow case designers to select a patient from the patient population or examine the effect of their decision across the entire virtual patient population.
- a particular case is created by designing a number of scenarios that are linked by the decisions that can be taken.
- This relates to the decision tree aspect of the invention, which may most usefully be mapped out in a flowchart or organisational chart, such as that shown in Figure 2.
- Each of the boxes can be thought of as branch elements (i.e. elements from which a branch extends or is possible) or termini (i.e. from which no further branch is possible).
- in each box there is provided some text indicative of the patient's physical state at that stage in the diagnosis procedure.
- Also provided in each branch element are a series of options or other means by which a user can enter information or make a selection. This user input is then analysed by the system to allow it to determine, according to the decision tree, which branch element to display next.
- a case is made up of many scenarios which need to be described individually to support the decisions that can be taken. To begin writing a case, it is necessary to consider the following pieces of information for each scenario:
- Audience The type of student for whom the case is designed (e.g. Pharmacy, Medical, Nursing students). As a case has many scenarios, the audience does not always have to be the same for each scenario. For example, by changing the audience, it is possible to design a case that allows a group of students from various health disciplines to work together on a single case.
- the system can easily be adapted to provide additional information to the user in the form of attachments, so Word documents, HTTP links, specific pictures and the like can be included, and the system can refer the student to these to support their decision for each scenario.
- it may also be necessary to categorise each decision into one of three types. If the system is used for a healthcare application, for example, all decisions may typically be broadly categorised as (a) treating the patient, (b) referring the patient to another healthcare professional or (c) giving advice to the patient.
- Multimedia With each scenario, it is possible (although not mandatory) to request visualisation of one or more key points of the scenario through virtual patient technologies incorporated into the system. This feature may invoke an animation based on the user's interaction with the system, and can therefore request a response from the virtual patient based on what has been said.
- Anne has suffered with asthma since childhood, suffering 3 exacerbations in the past 12 months. She had a total hysterectomy at age 48 (menorrhagia & prolapse). FH of CVD: her mother died following a stroke at age 76; prior to this she had a succession of TIAs and had moved in with Anne and her husband. Her husband worked in management for the local coal board and was retired on grounds of ill health (arthritis) in 1996 (age 62).
- Anne buys analgesics regularly from the local pharmacy for her husband (Co-Codamol 8/500) as he doesn't like to bother the GP for such 'minor' medications. Anne doesn't have any help at home; she does her own cooking and cleaning, and when her asthma is OK her mobility and exercise tolerance are good. She was advised to increase her activity levels a few years ago and has started to walk the dog more, since her husband is becoming increasingly unable to walk long distances without significant pain.
- the system is designed as follows.
- the system 2 consists of three main components to deliver the core functionality. These are referred to as:
- the Patient Case File 4 - this is an XML based file that drives the content in each case.
- the file format can be generated with supporting applications to allow case designers with little or no technical knowledge to create new cases for use with the system.
- This file uses an XML definition to allow the decision engine to parse the file and process its contents.
- the files employ a decision tree to traverse the various scenarios a patient case may have, depending on the decisions taken within a case.
- the Decision Engine 6 - this is responsible for parsing the Patient Case File and rendering the content into a machine-readable format.
- the decision engine 6 is also responsible for calling any external resources 8 that may be needed to render the case (e.g. documents, images, animation/sound files) and then formatting the case back to the user via a standard output format (e.g. web page).
- the external resources 8 also include the descriptors which can be applied to the virtual patient, the computer-readable representation of which is similarly retained in the database.
- the engine also tracks the decisions taken by a user in each case and then passes this data on to a database 10 for recording. This information is then used when a user wishes to examine a transcript of what decisions the user made for a specific case.
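The decision-tracking role described above could be sketched as follows. Here `sqlite3` stands in for the database 10 and the table and function names are hypothetical; the patent does not specify a storage layer.

```python
# Sketch of decision tracking: each user choice is recorded so that a
# transcript can be recalled later. sqlite3 stands in for the database;
# table/column names are illustrative assumptions.
import sqlite3


def open_db() -> sqlite3.Connection:
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE decisions (user TEXT, case_id TEXT, step INTEGER, choice TEXT)"
    )
    return conn


def record_decision(conn, user, case_id, step, choice):
    """Log one decision taken at a given step of a case."""
    conn.execute(
        "INSERT INTO decisions VALUES (?, ?, ?, ?)", (user, case_id, step, choice)
    )


def transcript(conn, user, case_id):
    """Return the ordered list of choices a user made in a case."""
    rows = conn.execute(
        "SELECT choice FROM decisions WHERE user=? AND case_id=? ORDER BY step",
        (user, case_id),
    )
    return [r[0] for r in rows]
```

The same records would also serve the higher-level use mentioned below, letting case designers aggregate over all users' transcripts to see which decisions are commonly taken.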
- the Database 10 - this is responsible for tracking decisions taken within each case (and ultimately for delivering feedback to the user, where the feedback functionality is provided) and for keeping a record of the location of external resources that may be required to render a case (e.g. animation files).
- the database is also referred to when a user wishes to recall their decisions within a case. This information is also used at a higher level, so that case designers can examine what type of decisions are being made in their case and if additional supporting information needs to be supplied to the user to improve the decision making process.
- information is declared in the XML file as a series of special XML tags.
- Each scenario is then declared via a series of scenario tags that describe what is happening to the patient at this stage of the case. Typically, one would expect to see a series of scenario tags making up the various scenarios of each case.
- the decisions are mapped to paths within the decision tree to allow the case to traverse the tree correctly.
- Each scenario is made anonymous by an identification (ID) value and is referenced by that ID in the XML file.
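A scenario declaration of this kind might look as follows. The tag and attribute names below are illustrative assumptions, since the actual schema is not given here:

```xml
<!-- Illustrative only: tag and attribute names are assumed, not from the patent -->
<scenario id="s2">
  <description>Anne's wheeze has worsened overnight.</description>
  <decision label="Prescribe a reliever inhaler" next="s4" category="treat"/>
  <decision label="Refer Anne to her GP" next="s5" category="refer"/>
  <animation ref="breathless.anim"/>
</scenario>
```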
- a tag is also included in the XML file which calls an external multimedia resource, and in particular an emotional or physical descriptor file which can be applied to a default virtual human (e.g. avatar) in memory, in accordance with embodiments of the invention.
- This may be an image file, sound file or an animation to cause the avatar to respond in a predefined way.
- Such animation files need to be designed before the XML file can reference them.
- animations are designed and can be invoked at a code level and applied to different patients. Therefore it is possible for the invention to call on a database of animations (using a combination of external and in-house developed multimedia resources) to invoke an emotion in the patient across a number of cases.
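A shared registry of this kind, in which one emotion descriptor is stored once and reused across patients and cases, could be sketched as below. The mapping contents and function names are hypothetical.

```python
# Sketch of a reusable animation registry: emotion descriptors are stored once
# and can be invoked for any patient avatar across cases. All names and file
# paths here are illustrative assumptions.
ANIMATION_DB = {
    "embarrassment": "anim/blush.anim",
    "pain": "anim/wince.anim",
    "relief": "anim/relax.anim",
}


def invoke_emotion(patient_id: str, emotion: str) -> str:
    """Look up the shared animation for an emotion and bind it to a patient."""
    try:
        clip = ANIMATION_DB[emotion]
    except KeyError:
        raise ValueError(f"no animation registered for emotion {emotion!r}")
    # Placeholder for the real playback call: return the bound clip reference.
    return f"{patient_id}:{clip}"
```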
- a virtual human interaction system for use on a PC or web enabled computer, which facilitates the training and education of users. Its initial application is directed to healthcare practitioners such as doctors, nurses, pharmacists and the like by allowing them to virtually interact with a virtual patient delivered by the system and displayed on a computer screen, although other applications outside the healthcare field may be envisaged.
- the system embodies a plurality of cases, and for each case, there are a number of possible outcomes, depending on the choices made by the healthcare services practitioner at each stage in a particular case.
- each case consists of a decision tree element consisting of branch elements from which the decision tree may branch in one or more directions toward further branch elements or tree termini; the user input causes the system to move through the decision tree of the case.
- Each branch element and terminus includes descriptors of a particular condition of the virtual human at that time in the specific case, and these are displayed to the user at each specific stage in the case to provide the user with a current indication of the well being of the virtual patient.
- the system causes the appearance of the virtual patient to change by applying one or more of said appearance descriptors to said virtual patient as the system moves through the decision tree in response to user input.
- the resulting effect is to provide users with an almost real-time indication of their actions on patients.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU2007361697A AU2007361697B2 (en) | 2007-11-27 | 2007-11-27 | Virtual human interaction system |
PCT/GB2007/050719 WO2009068838A1 (en) | 2007-11-27 | 2007-11-27 | Virtual human interaction system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/GB2007/050719 WO2009068838A1 (en) | 2007-11-27 | 2007-11-27 | Virtual human interaction system |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2009068838A1 true WO2009068838A1 (en) | 2009-06-04 |
Family
ID=39267904
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/GB2007/050719 WO2009068838A1 (en) | 2007-11-27 | 2007-11-27 | Virtual human interaction system |
Country Status (2)
Country | Link |
---|---|
AU (1) | AU2007361697B2 (en) |
WO (1) | WO2009068838A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115762688A (en) * | 2022-06-13 | 2023-03-07 | 人民卫生电子音像出版社有限公司 | Super-simulation virtual standardized patient construction system and diagnosis method |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6692258B1 (en) * | 2000-06-26 | 2004-02-17 | Medical Learning Company, Inc. | Patient simulator |
US20040064298A1 (en) * | 2002-09-26 | 2004-04-01 | Robert Levine | Medical instruction using a virtual patient |
US20040121295A1 (en) * | 2002-12-20 | 2004-06-24 | Steven Stuart | Method, system, and program for using a virtual environment to provide information on using a product |
WO2005055011A2 (en) * | 2003-11-29 | 2005-06-16 | American Board Of Family Medicine, Inc. | Computer architecture and process of user evaluation |
US6972775B1 (en) * | 1999-11-01 | 2005-12-06 | Medical Learning Company, Inc. | Morphing patient features using an offset |
2007
- 2007-11-27 WO PCT/GB2007/050719 patent/WO2009068838A1/en active Application Filing
- 2007-11-27 AU AU2007361697A patent/AU2007361697B2/en not_active Ceased
Also Published As
Publication number | Publication date |
---|---|
AU2007361697A1 (en) | 2009-06-04 |
AU2007361697B2 (en) | 2013-03-21 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 07824929 Country of ref document: EP Kind code of ref document: A1 |
NENP | Non-entry into the national phase |
Ref country code: DE |
WWE | Wipo information: entry into national phase |
Ref document number: 2007361697 Country of ref document: AU |
ENP | Entry into the national phase |
Ref document number: 2007361697 Country of ref document: AU Date of ref document: 20071127 Kind code of ref document: A |
122 | Ep: pct application non-entry in european phase |
Ref document number: 07824929 Country of ref document: EP Kind code of ref document: A1 |