WO2009155415A2 - Training and rehabilitation system, and associated method and computer program product - Google Patents

Training and rehabilitation system, and associated method and computer program product

Info

Publication number
WO2009155415A2
WO2009155415A2 (PCT/US2009/047790)
Authority
WO
WIPO (PCT)
Prior art keywords
image
interest
specimen
human
computable
Prior art date
Application number
PCT/US2009/047790
Other languages
French (fr)
Other versions
WO2009155415A3 (en)
Inventor
Diglio A. Simoni
Original Assignee
Research Triangle Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Research Triangle Institute filed Critical Research Triangle Institute
Publication of WO2009155415A2 publication Critical patent/WO2009155415A2/en
Publication of WO2009155415A3 publication Critical patent/WO2009155415A3/en

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 7/00 — Computing arrangements based on specific mathematical models
    • G06N 7/02 — Computing arrangements based on specific mathematical models using fuzzy logic
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 — Computing arrangements based on biological models
    • G06N 3/02 — Neural networks
    • G06N 3/04 — Architecture, e.g. interconnection topology
    • G06N 3/043 — Architecture based on fuzzy logic, fuzzy membership or fuzzy inference, e.g. adaptive neuro-fuzzy inference systems [ANFIS]
    • G — PHYSICS
    • G09 — EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B — EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 19/00 — Teaching not covered by other main groups of this subclass
    • G — PHYSICS
    • G16 — INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H — HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 20/00 — ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H 20/30 — ICT specially adapted for therapies or health-improving plans relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising

Definitions

  • Embodiments of the present invention are generally directed to artificial intelligence systems and, more particularly, to training and rehabilitation systems and associated methods and computer program products.
  • IED: improvised explosive device
  • Visual search tasks are accomplished through a cycle of fixations and visual scene analysis interrupted by saccades.
  • A saccade produces a rapid shift of gaze, redirecting the fovea (a tiny pit located in the macula of the retina that is responsible for sharp central vision) onto a new point in the visual scene.
  • As the visual system reacquires the new image, the visual scene is remapped onto primary visual cortex, governed by the physical limits imposed by the retinal photoreceptor layout and the cortical magnification factor.
  • Although the physical makeup of our perception systems may be the same, we do not all perceive stimuli identically, nor do we process stimuli in the same manner so as to have the same psychophysical reactions.
  • Some observers may be more experienced or may be trained to analyze images efficiently and effectively.
  • For example, the psychophysical characteristics of visual search tasks performed by experienced doctors searching tumor tissue images for potentially cancerous cells may differ greatly from the psychophysical visual search task characteristics of inexperienced doctors.
  • In general, exemplary embodiments of the present invention provide an improvement over the known prior art by, among other things, providing a method of behavioral training, a method of rehabilitation, and associated systems and computer program products.
  • In particular, one exemplary aspect of the present invention provides a method of behavioral training that comprises transforming psychophysical reactions of an exemplary specimen into computable statements using fuzzy logic, wherein the computable statements are associated with a perception process.
  • The computable statements are incorporated into an expert system, and the expert system of the perception process is then combined with a neural network corresponding to a perception model, so as to form a behavioral system.
  • A stimulus is introduced to the behavioral system and an exemplary response elicited therefrom.
  • An untrained specimen, also introduced to the stimulus, is induced to mimic the exemplary response so as to train the untrained specimen to display the psychophysical reactions of the exemplary specimen.
  • Another exemplary aspect of the present invention provides a method of rehabilitation that comprises transforming psychophysical reactions of an unimpaired specimen into computable statements using fuzzy logic, wherein the computable statements are associated with a perception process.
  • The computable statements are incorporated into an expert system, and the expert system of the perception process is combined with a neural network corresponding to a perception model, so as to form a demonstration system.
  • A scenario is introduced to the demonstration system and an exemplary response elicited therefrom.
  • A debilitated specimen, also introduced to the scenario, is then trained to mimic the exemplary response to thereby rehabilitate the debilitated specimen.
  • Fig. 1 shows an example of a search trial image used in an experiment where subjects were asked to search for a target object within a sample image;
  • Fig. 2 shows the sample image of Fig. 1, including the target object and a search path, as well as regions of interest;
  • Fig. 3 shows a graphical representation of the cortical magnification factor;
  • Fig. 4 shows an example of a sample image search model developed by the inventors of the present application in accordance with one embodiment of the present invention;
  • Fig. 5 shows a detailed view of an image perception-based search model in accordance with another embodiment of the present invention;
  • Fig. 6 shows a detailed view of an image perception-based search model in accordance with another embodiment of the present invention, focusing on the neural network component;
  • Fig. 7 shows a detailed view of an image perception-based search model in accordance with another embodiment of the present invention, focusing on the fuzzy expert system component;
  • Fig. 8 shows a search trial image similar to that shown in Fig. 1, wherein an image is searched for the target object;
  • Fig. 9 schematically illustrates one exemplary embodiment of the present invention, directed to a method of behavioral training;
  • Fig. 10 schematically illustrates another exemplary embodiment of the present invention, directed to a method of rehabilitation; and
  • Fig. 11 shows a block diagram of an exemplary electronic device configured to execute a method and computer program product for behavioral training or for rehabilitation in accordance with an exemplary embodiment of the present invention.
  • Visual search mechanisms can be examined using simple search paradigms. For example, a subject can be asked to search for a particular letter hidden among a group of distractors.
  • The subject's eyes can be tracked using various techniques (such as, for example, infrared eye tracking devices) that record the position of the subject's eyes on the display in real time.
  • Measures of performance can be obtained, including reaction time and search statistics based on the position of the target with respect to the position of the eyes on the display.
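The application does not spell out how these performance measures are computed, but a minimal sketch is easy to give. The function name, the gaze-sample representation, and the 100 Hz sample rate below are all illustrative assumptions, not the patent's method.

```python
# Hypothetical sketch of eye-tracking performance measures; the application
# does not specify these computations. Names and sample rate are assumptions.
import math

def search_statistics(gaze_samples, target_xy, sample_rate_hz=100.0):
    """Derive reaction time and eye-to-target distance statistics from a
    sequence of (x, y) gaze positions recorded during one search trial."""
    tx, ty = target_xy
    dists = [math.hypot(x - tx, y - ty) for (x, y) in gaze_samples]
    return {
        "reaction_time_s": len(gaze_samples) / sample_rate_hz,  # trial length
        "mean_dist_to_target": sum(dists) / len(dists),
        "min_dist_to_target": min(dists),    # closest approach to the target
        "final_dist_to_target": dists[-1],   # where the eyes ended up
    }

stats = search_statistics([(0, 0), (30, 40), (98, 100)], target_xy=(100, 100))
print(stats["reaction_time_s"])  # 3 samples at 100 Hz -> 0.03
```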
  • Fig. 1 shows an example of a search trial image used in an experiment where a subject was asked to search for a target object 10 (in this case a red (shown in the figure as crossed section lining) rotated L hidden within a group of red (shown in the figure as crossed section lining) Ts and green (shown in the figure as diagonal section lining) Ls) within a sample image 20.
  • The white line indicates the search path 30 dictated by the position of the subject's eyes (as tracked by an infrared eye tracking device) as the subject performed the search trial.
  • Fig. 2 shows the sample image 20 of Fig. 1, including the target object 10 and the search path 30, as well as the regions of interest 40 determined by the areas of the image upon which the subject fixated.
  • The flowchart on the right of the drawing indicates a traditional image search model 50 used to describe the manner in which the human brain solves this type of search problem.
  • According to this model, a subject first acquires the scene, which essentially indicates that the subject quickly examines the image 20 as a whole so as to put the entire image 20 into context.
  • In block 70, the subject selects a region of interest and, in block 80, attempts to identify the target 10 within (or near, through peripheral vision) the region of interest 40.
  • If the target 10 is found, the search ends. If, however, as shown in the drawing, the target 10 is not found, the procedure returns to block 70 and a new region of interest 40 is selected. This process continues until the target 10 is finally identified, at which point the searching stops.
  • The flowchart 50, however, may not properly model the true human response. For example, selection of a new fixation location is determined on a real-time basis depending on the current point of view, taking into account, for example, the retinocortical transformation of image space. This nonlinear transformation may induce certain constraints that naturally affect the way that regions of interest are selected for further processing.
  • In particular, a cortical magnification factor, which results in an increased internal representation of image data at the fovea, implies that a larger amount of neural processing is expended in areas close to the point of fixation, resulting in a natural division of work between the processes involved in target identification (which are done centrally) and those involved in the selection of a new fixation location (which are done in the periphery).
  • Fig. 3 shows a graphical representation of the cortical magnification factor.
  • The left side of the drawing shows fixation on an object 80 of an image 90 displayed at the center of a polar coordinate grid.
  • On the right side of the drawing is a representation of a model that represents the effect of cortical magnification.
  • The fovea is located at the left tip of the model.
  • Because of this magnification, the object 80 under fixation is processed by a larger number of neurons than the objects in the periphery.
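The application gives no formula for the cortical magnification factor, but vision-science literature commonly approximates it (in the style of Horton and Hoyt) as M(E) = 17.3 / (E + 0.75) millimeters of cortex per degree of visual angle at eccentricity E. The sketch below uses those published constants purely for illustration; they are not taken from the application.

```python
# Standard first-order approximation of cortical magnification; the constants
# come from the vision-science literature, not from the application.
def cortical_magnification(ecc_deg):
    """Millimeters of primary visual cortex devoted to one degree of visual
    angle at eccentricity `ecc_deg` (degrees away from the fovea)."""
    return 17.3 / (ecc_deg + 0.75)

# Cortical resources fall off steeply with eccentricity, which is why target
# identification is foveal while new-fixation selection relies on periphery.
print(cortical_magnification(0.0))   # fovea: ~23 mm/deg
print(cortical_magnification(10.0))  # periphery: ~1.6 mm/deg
```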
  • Fig. 4 shows an example of a sample image search model 100 in accordance with one embodiment of the present invention.
  • This model differs from the traditional image search model 50 by providing a parallel framework that represents the division of work between central processes 110, which are involved in target identification, and peripheral processes 120, which are involved in the selection of a new fixation location (i.e., a new ROI).
  • In block 130, the subject begins a new fixation.
  • In block 140, the subject selects an ROI, while in block 150 the subject determines whether the target has been identified. If, in block 160, the target is identified, the process stops. If, however, the target has not been identified, the process returns to block 130, where the subject begins a new fixation.
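A toy version of this parallel loop can be sketched as follows, using a 1-D list of saliency values as the "image". The helper logic and toy stimulus are illustrative assumptions, not the patent's implementation.

```python
# Toy sketch of the Fig. 4 loop: a central process tries to identify the
# target at the current fixation while a peripheral process picks the next,
# most salient, unvisited region of interest. Purely illustrative.
def search(saliency, target_index, max_fixations=100):
    visited = set()
    fixation = len(saliency) // 2                  # block 130: new fixation
    for _ in range(max_fixations):
        visited.add(fixation)
        if fixation == target_index:               # blocks 150/160: found
            return fixation
        # Block 140: peripheral selection of the next ROI by saliency.
        remaining = [i for i in range(len(saliency)) if i not in visited]
        if not remaining:
            return None
        fixation = max(remaining, key=lambda i: saliency[i])
    return None

# Fixations visit the center, then the most salient item, then the target.
print(search([0.1, 0.9, 0.2, 0.7, 0.1], target_index=3))  # 3
```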
  • In accordance with embodiments of the present invention, a perception-based model may be implemented, whereby the design principles of such a model may be derived from psychophysical observations of human performance during active visual search tasks via the use of, for example, a real-time infrared or other suitable eye-tracker.
  • Psychophysical experiments were used to obtain probabilistic measures of both stimulus and neuroanatomical features that constrain the human visual system's real-time selection of image regions (ROIs) during the target discovery periods of active visual search.
  • These measures were transformed into fuzzy predicates (i.e., computable statements) that form a rule set for driving a model of human search performance (i.e., an expert system) that takes into account the intrinsic uncertainty of sensory processing.
  • Fig. 5 shows a more detailed view of the image perception-based search model 100.
  • As shown, block 140 includes a neural network component (block 170) and a fuzzy expert system (FES) component (block 180).
  • The neural network component of the depicted embodiment calculates the saliency of the visual scene under scrutiny using a parallel computation composed of several feature map calculations at different spatial scales and combines them into a single map that describes the significance of each location on the input scene. In the case of searching an image for one or more items of interest, this results in a set of ROIs.
  • Fig. 6 shows a set of ROIs.
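The saliency computation of block 170 can be illustrated in miniature. The sketch below follows the general multi-scale, center-surround recipe the passage describes; the box-blur stand-in for a Gaussian pyramid and the single intensity feature are simplifying assumptions.

```python
# Miniature saliency map: feature maps at several spatial scales are
# normalized and combined into one master map (cf. block 170). The box blur
# and single intensity feature are simplifying assumptions.
import numpy as np

def blur(img, radius):
    """Crude iterated neighborhood averaging, standing in for a Gaussian
    pyramid level at the given scale."""
    out = np.copy(img)
    for _ in range(radius):
        out = (out
               + np.roll(out, 1, 0) + np.roll(out, -1, 0)
               + np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 5.0
    return out

def saliency_map(image, scales=(1, 3, 5)):
    """Combine center-surround differences at several scales into one map
    describing the significance of each location on the input scene."""
    maps = [np.abs(image - blur(image, s)) for s in scales]   # feature maps
    return sum(m / (m.max() + 1e-9) for m in maps) / len(scales)

img = np.zeros((32, 32))
img[10, 20] = 1.0                       # one conspicuous item in the scene
sal = saliency_map(img)
peak = np.unravel_index(sal.argmax(), sal.shape)
print(int(peak[0]), int(peak[1]))       # the item's location: 10 20
```

Thresholding or rank-ordering the peaks of such a master map yields the set of candidate ROIs handed to the expert system.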
  • The fuzzy expert system component (block 180) comprises a knowledge base and a logical inference engine that applies the facts to the rule sets and produces appropriate decisions along with the "train of thought" that was used to arrive at a particular choice.
  • Types of rules may include, but need not be limited to, relation, recommendation, directive, strategy, heuristic, etc.
  • The present fuzzy expert system translates psychometric functions into reasonable collections of fuzzy sets that attempt to capture the essence of the measurement. Therefore, the resulting perception-based model uses fuzzy logic for the transformation of psychophysical observations into computable statements that can be used to intelligently guide the selection process in real time.
  • The resulting search model thus may be used to create a perception-based model that mimics the response of one or more trained humans.
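As an illustration of such a transformation, the sketch below encodes one psychophysical observation ("ROIs near fixation with high saliency are favored") as fuzzy membership functions and a single fuzzy rule. The linguistic terms, membership shapes, and thresholds are all hypothetical; the patent does not disclose its rule base at this level of detail.

```python
# Hypothetical fuzzy "computable statements": two membership functions and
# one rule. All terms, shapes, and thresholds are illustrative assumptions.
def mu_near(ecc_deg):
    """Membership in the fuzzy set 'near fixation' (triangular, 0 at 10 deg)."""
    return max(0.0, 1.0 - ecc_deg / 10.0)

def mu_salient(s):
    """Membership in 'highly salient' for a saliency score s in [0, 1]."""
    return min(1.0, max(0.0, (s - 0.2) / 0.6))

def roi_priority(ecc_deg, saliency):
    """Rule: IF roi IS near-fixation AND roi IS highly-salient
    THEN roi HAS high priority. (min acts as the fuzzy AND.)"""
    return min(mu_near(ecc_deg), mu_salient(saliency))

# An equally salient ROI close to fixation outranks one far in the periphery,
# mirroring the cortical-magnification constraint measured psychophysically.
print(roi_priority(2.0, 0.8) > roi_priority(9.0, 0.8))  # True
```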
  • The top left portion of Fig. 8 shows a search trial image 200 similar to that shown in Fig. 1, wherein a human observer searches the image 200 for the target object 210.
  • An ideal search path 330 (i.e., for a "trained" human) and a random search path 430 are shown for respective images 300 and 400.
  • Although the perception-based model described herein relates to visual stimuli, similar models could be generated for other perceptions, including, but not limited to, auditory perceptions.
  • The perception-based models may be used in a variety of ways. For example, in some embodiments, they may serve as the basis for a behavioral system in order to provide a method of behavioral training for untrained specimens. In other embodiments, they may serve as the basis for a method of rehabilitation in order to rehabilitate debilitated specimens.
  • Fig. 9 schematically illustrates one exemplary embodiment of the present invention, directed to a method of behavioral training 600.
  • The method comprises training an untrained specimen to display the psychophysical reactions of a trained specimen.
  • Such a method first comprises transforming psychophysical reactions of an exemplary specimen into computable statements using fuzzy logic (Block 610), the computable statements being associated with a perception process.
  • Although the perception process of various embodiments may comprise a variety of perceptions, in some embodiments the perception process may comprise the visual analysis of at least one image that may contain one or more items of interest. As noted above, such embodiments may be useful in the medical field, where images of biological tissue may contain cancerous or otherwise defective and/or abnormal cells.
  • A specimen may be any human or machine specimen; however, in some embodiments, the trained specimen may be a trained human and the untrained specimen may be an untrained human.
  • In such embodiments, the psychophysical reactions of the trained human may comprise eye scan patterns, wherein the perception process may comprise visual analysis of at least one image in search of at least one item of interest. The computable statements are then incorporated into an expert system (Block 620).
  • The expert system of the perception process is then combined with a neural network corresponding to a perception model so as to form a behavioral system (Block 630).
  • In some embodiments, the perception model may comprise computing saliency across at least one image to generate one or more regions of interest, wherein the expert system selects at least one of the regions of interest for identification of at least one item of interest therein.
  • Stimuli may be introduced to the behavioral system so as to elicit an exemplary response (Block 640).
  • An untrained specimen, also introduced to the stimuli, may then be induced to mimic the exemplary response of the behavioral system so as to train the untrained specimen to display the psychophysical reactions of the exemplary specimen (Block 650).
  • For example, an untrained human observer may be trained by preprocessing images that are known to have problematic regions of interest (such as, for example, images in which a trained human has located one or more items of interest) so that the untrained human observer is directed to analyze these problem areas, thus displaying the psychophysical reactions of the trained human.
  • In other embodiments, the untrained specimen may comprise an untrained artificial intelligence system, whereby the artificial intelligence system is trained to mimic the response of an exemplary specimen.
  • The exemplary specimen may be a trained human or, in other embodiments, a trained artificial intelligence system.
  • Inducing the untrained artificial intelligence system in some embodiments may comprise programming the untrained artificial intelligence system to yield the psychophysical reactions of the trained specimen.
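The flow of Blocks 610-650 can be summarized schematically. Every name below is a hypothetical stand-in showing only how the pieces are wired together (fuzzy rules as the expert system, a saliency model as the perception model), not an implementation of the patented system.

```python
# Schematic wiring of Blocks 610-650; all classes and callables are
# hypothetical stand-ins, not the patent's implementation.
class BehavioralSystem:
    def __init__(self, expert_rules, perception_model):
        self.rules = expert_rules        # Blocks 610/620: fuzzy statements
        self.model = perception_model    # Block 630: perception (saliency)

    def respond(self, stimulus):         # Block 640: elicit a response
        rois = self.model(stimulus)      # perception model proposes ROIs
        return max(rois, key=self.rules) # expert system picks among them

# Block 650 would then present the same stimulus to the untrained specimen
# and induce it to mimic `system.respond(stimulus)`.
system = BehavioralSystem(expert_rules=lambda roi: roi[1],
                          perception_model=lambda s: [(0, 0.3), (1, 0.9)])
print(system.respond("image"))  # -> (1, 0.9)
```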
  • Fig. 10 schematically illustrates another exemplary embodiment of the present invention, directed to a method of rehabilitation 700.
  • The method comprises rehabilitating a debilitated specimen to mimic the exemplary response of a demonstration system.
  • Such a method first comprises transforming psychophysical reactions of an unimpaired specimen into computable statements using fuzzy logic (Block 710), the computable statements being associated with a perception process.
  • Although the debilitated specimen may be any human or machine specimen, in some embodiments the debilitated specimen may be an impaired human observer.
  • Likewise, although the unimpaired specimen may be any human or machine specimen, in some embodiments the unimpaired specimen may be an unimpaired human observer.
  • Although the perception process of various embodiments may comprise a variety of perceptions, in some embodiments the perception process may comprise the visual analysis of at least one vehicle driving scenario.
  • The computable statements are then incorporated into an expert system (Block 720).
  • The expert system of the perception process is then combined with a neural network corresponding to a perception model so as to form a demonstration system (Block 730).
  • In some embodiments, the perception model may comprise computing saliency across at least one image to generate one or more regions of interest, wherein the expert system selects at least one of the regions of interest for identification of at least one item of interest.
  • In such embodiments, the rehabilitation system may be used to rehabilitate human drivers that are or have become impaired.
  • Referring now to Fig. 11, a block diagram of an exemplary electronic device 800 (e.g., mainframe, PC, laptop, PDA, etc.) is shown that is configured to execute a method and computer program product for behavioral training or for rehabilitation.
  • The electronic device may include various modules for performing one or more functions in accordance with exemplary embodiments of the present invention, including those more particularly shown and described herein, wherein such modules may comprise hardware, software, or a combination thereof. It should be understood, however, that the electronic device may include alternative configurations for performing one or more like functions, without departing from the spirit and scope of the present invention.
  • The electronic device may generally include components, such as a processor, controller, or the like 802 connected to a memory 804, for performing or controlling the various functions of the disclosure.
  • The memory can comprise volatile and/or non-volatile memory, and typically stores content, data, or the like.
  • For example, the memory typically stores content transmitted from, and/or received by, the electronic device.
  • The memory also typically stores software applications, instructions, or the like for the processor to perform steps associated with operation of the electronic device in accordance with embodiments of the present invention.
  • In particular, the memory 804 may store computer program code for an application and other computer programs.
  • For example, the memory may store computer program code for, among other things, transforming psychophysical reactions of an exemplary (or unimpaired) specimen into computable statements using fuzzy logic, the computable statements being associated with a perception process; incorporating the computable statements into an expert system; combining the expert system of the perception process with a neural network corresponding to a perception model, so as to form a behavioral (or demonstration) system; introducing a stimulus (or scenario) to the behavioral (or demonstration) system and eliciting an exemplary response therefrom; and inducing an untrained (or debilitated) specimen, also introduced to the stimulus (or scenario), to mimic the exemplary response so as to train the untrained specimen to display the psychophysical reactions of the exemplary specimen (or to rehabilitate the debilitated specimen).
  • The processor 802 can also be connected to at least one interface or other component(s) for displaying, transmitting, and/or receiving data, content, or the like.
  • In this regard, the interface(s) can include at least one communication interface 806 or other component(s) for transmitting and/or receiving data, content, or the like.
  • The communication interface may provide for communications with an input device.
  • The communication interface may also include at least one user interface that can include a display 808 and/or a user input interface 810.
  • The user input interface, in turn, can comprise any of a number of devices allowing the electronic device to receive data from a user, such as a keypad, a touch display, a joystick, or other input device.
  • Embodiments of the present invention may be configured as a method. Accordingly, embodiments of the present invention may be comprised of various configurations, including entirely of hardware, entirely of software, or any combination of software and hardware. Furthermore, embodiments of the present invention may take the form of a computer program product consisting of a computer-readable storage medium (e.g., the memory 804 of Fig. 11) and computer-readable program instructions stored in the storage medium. Any suitable computer-readable storage medium may be utilized, including hard disks, CD-ROMs, optical storage devices, or magnetic storage devices.
  • These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including computer-readable instructions for implementing the function specified in the flowchart block or blocks.
  • The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process, such that the instructions that execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart block or blocks.
  • Accordingly, blocks of the block diagrams and flowchart illustrations support combinations of components for performing the specified functions, combinations of steps for performing the specified functions, and program instructions for performing the specified functions. It will also be understood that each block of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations, can be implemented by special-purpose hardware-based computer systems that perform the specified functions or steps, or combinations of special-purpose hardware and computer instructions.

Abstract

A method of behavioral training is provided, wherein psychophysical reactions of an exemplary specimen are transformed into computable statements, associated with a perception process, using fuzzy logic. The computable statements are incorporated into an expert system, and the expert system of the perception process is combined with a neural network corresponding to a perception model, so as to form a behavioral system. A stimulus is introduced to the behavioral system and an exemplary response elicited therefrom. An untrained specimen, also introduced to the stimulus, is induced to mimic the exemplary response so as to train the untrained specimen to display the psychophysical reactions of the exemplary specimen. A rehabilitation method, as well as associated methods, systems, and computer program products, are also provided.

Description

TRAINING AND REHABILITATION SYSTEM, AND ASSOCIATED METHOD AND COMPUTER PROGRAM PRODUCT
BACKGROUND OF THE INVENTION Field of the Invention
Embodiments of the present invention are generally directed to artificial intelligence systems and, more particularly, to training and rehabilitation systems and associated methods and computer program products.
Description of Related Art
Our comprehension of our environment is based to a very large extent on our perceptions, including, but not limited to, perceptions of visual, auditory, tactile, olfactory, and taste stimuli. In many instances, these stimuli are perceived and analyzed casually. However, in other instances, analysis of these stimuli may be critical to the health, wealth, and/or overall well-being of ourselves or of other persons, businesses, or entities.
For example, many medical specialists use existing systems that store biological images obtained with very high spatiotemporal resolution for medical research. One particular system involves the visual analysis of large image repositories containing slices of tumor tissues in search of potentially cancerous tumor cells. Other examples include, but are not limited to: military specialists who analyze images for the purposes of automatic target recognition, submarine specialists who listen to and analyze sounds emanating from the acoustic space surrounding their vessel, air travel security officers who analyze x-ray images for a variety of contraband, quality control specialists who analyze finished products, processes, and/or components for defects, and military specialists who identify threat conditions, such as the existence of improvised explosive devices (IEDs).
Common among the analysis of image stimuli is a search for items of importance via a series of visual search tasks. Many of us go about our daily routines making effective use of our visual abilities without realizing the computational complexity of those tasks. Visual search tasks are accomplished through a cycle of fixations and visual scene analysis interrupted by saccades.
A saccade produces a rapid shift of gaze, redirecting the fovea (a tiny pit located in the macula of the retina that is responsible for sharp central vision) onto a new point in the visual scene. As the visual system reacquires the new image, the visual scene is remapped onto primary visual cortex governed by the physical limits imposed by the retinal photoreceptor layout and the cortical magnification factor. Although the physical makeup of our perception systems may be the same, we do not all perceive stimuli identically, nor do we process stimuli in the same manner so as to have the same psychophysical reactions. Some observers may be more experienced or may be trained to analyze images efficiently and effectively. For example, the psychophysical characteristics of visual search tasks performed by experienced doctors searching tumor tissue images for potentially cancerous cells may differ greatly from the psychophysical visual search task characteristics of inexperienced doctors.
Rapid advances in high-performance computing, processing, and sensor technology have resulted in an exponential growth in the amount of stimuli (such as, for example, digital data) that is available for analysis. While the amount of stimuli continues to mount, it is apparent that manual analysis of certain stimuli by inexperienced subjects will become less practical. As a result, there is a need for a system, method and computer program product designed to model the efficient analysis of stimuli. Such a system, method and computer program product should be designed to mimic the psychophysical reactions of trained subjects such that untrained subjects may be trained to follow the reactions. Additionally, such a system, method and computer program product should be designed to rehabilitate a debilitated subject using model psychophysical reactions.
BRIEF SUMMARY OF VARIOUS EMBODIMENTS In general, exemplary embodiments of the present invention provide an improvement over the known prior art by, among other things, providing a method of behavioral training, a method of rehabilitation, and associated systems and computer program products.
In particular, one exemplary aspect of the present invention provides a method of behavioral training that comprises transforming psychophysical reactions of an exemplary specimen into computable statements using fuzzy logic, wherein the computable statements are associated with a perception process. The computable statements are incorporated into an expert system, and the expert system of the perception process is then combined with a neural network corresponding to a perception model so as to form a behavioral system. A stimulus is introduced to the behavioral system, and an exemplary response is elicited therefrom. An untrained specimen, also introduced to the stimulus, is induced to mimic the exemplary response so as to train the untrained specimen to display the psychophysical reactions of the exemplary specimen.
Another exemplary aspect of the present invention provides a method of rehabilitation that comprises transforming psychophysical reactions of an unimpaired specimen into computable statements using fuzzy logic, wherein the computable statements are associated with a perception process. The computable statements are incorporated into an expert system, and the expert system of the perception process is combined with a neural network corresponding to a perception model so as to form a demonstration system. A scenario is introduced to the demonstration system, and an exemplary response is elicited therefrom. A debilitated specimen, also introduced to the scenario, is then trained to mimic the exemplary response to thereby rehabilitate the debilitated specimen. As such, aspects of the present invention provide significant advantages as otherwise detailed herein.
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWING(S) Having thus described the invention in general terms, reference will now be made to the accompanying drawings, which are not necessarily drawn to scale, and wherein:
Fig. 1 shows an example of a search trial image used in an experiment where subjects were asked to search for a target object within a sample image;
Fig. 2 shows the sample image of Fig. 1 , including the target object and a search path, as well as regions of interest;
Fig. 3 shows a graphical representation of the cortical magnification factor;
Fig. 4 shows an example of a sample image search model developed by the inventors of the present application in accordance with one embodiment of the present invention;
Fig. 5 shows a detailed view of an image perception-based search model in accordance with another embodiment of the present invention;
Fig. 6 shows a detailed view of an image perception-based search model in accordance with another embodiment of the present invention, focusing on the neural network component;
Fig. 7 shows a detailed view of an image perception-based search model in accordance with another embodiment of the present invention, focusing on the fuzzy expert system component;
Fig. 8 shows a search trial image similar to that shown in Fig. 1, wherein an image is searched for the target object;
Fig. 9 schematically illustrates one exemplary embodiment of the present invention, directed to a method of behavioral training;
Fig. 10 schematically illustrates another exemplary embodiment of the present invention, directed to a method of rehabilitation; and
Fig. 11 shows a block diagram of an exemplary electronic device configured to execute a method and computer program product for behavioral training or for rehabilitation in accordance with an exemplary embodiment of the present invention.
DETAILED DESCRIPTION
The present invention now will be described more fully hereinafter with reference to the accompanying drawings, in which some, but not all embodiments of the invention are shown. Indeed, this invention may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will satisfy applicable legal requirements. Like numbers refer to like elements throughout.
Our attention, whether through sight, sound, touch, etc., is naturally drawn to things that stand out in contrast to their environment. In terms of our visual perception process, objects that have contrasting orientations, motions, colors, luminances, etc. are easily noticed. For example, our attention is quickly focused on a red ball surrounded by a group of white balls, a moving object among a group of non-moving objects, or the brightest star in the night sky. An object that does not differ greatly from its surroundings, however, does not easily attract our attention. In situations where we are visually searching for a target object that is similar to, or does not significantly contrast with, its surroundings, our visual perception process tends to break the visual image into a series of smaller portions, or so-called regions of interest (ROIs).
In the laboratory, visual search mechanisms can be examined using simple search paradigms. For example, a subject can be asked to search for a particular letter hidden among a group of distractors. The subject's eyes can be tracked using various techniques (such as, for example, infrared eye tracking devices) that record the position of the subject's eyes on the display in real-time. Several measures of performance can be obtained, including reaction time and search statistics based on the position of the target with respect to the position of the eyes on the display.
For example, Fig. 1 shows an example of a search trial image used in an experiment where a subject was asked to search for a target object 10 (in this case a red (shown in the figure as crossed section lining) rotated L hidden within a group of red (shown in the figure as crossed section lining) Ts and green (shown in the figure as diagonal section lining) Ls) within a sample image 20. The white line indicates the search path 30 dictated by the position of the subject's eyes (as tracked by an infrared eye tracking device) as the subject performed the search trial. Fig. 2 shows the sample image 20 of Fig. 1, including the target object 10 and the search path 30, as well as the regions of interest 40 determined by the areas of the image upon which the subject fixated. The flowchart on the right of the drawing indicates a traditional image search model 50 used to describe the manner in which the human brain solves this type of search problem. In the first block 60 of the traditional image search model 50, the subject first acquires the scene, which essentially indicates that the subject quickly examines the image 20 as a whole so as to put the entire image 20 into context. In the next block 70, the subject selects a region of interest and, in block 80, attempts to identify the target 10 within (or near, through peripheral vision) the region of interest 40. In block 90, if the target 10 is found, the search ends. If, however, as shown in the drawing, the target 10 is not found, the procedure returns to block 70 and a new region of interest 40 is selected. This process continues until the target 10 is finally identified, at which point the searching stops.
However, in some instances, the flowchart 50 may not properly model the true human response. For example, selection of a new fixation location is determined on a real-time basis depending on the current point of view, taking into account, for example, the retinocortical transformation of image space. This nonlinear transformation may induce certain constraints that naturally affect the way that regions of interest are selected for further processing. In particular, a cortical magnification factor, which results in an increased internal representation of image data at the fovea, implies that a larger amount of neural processing is expended in areas close to the point of fixation, resulting in a natural division of work between the processes involved in target identification (which are done centrally) as opposed to those involved in the selection of a new fixation location (which are done in the periphery). Fig. 3 shows a graphical representation of the cortical magnification factor. The left side of the drawing shows fixation on an object 80 of an image 90 displayed at the center of a polar coordinate grid. On the right side of the drawing is a representation of a model that represents the effect of cortical magnification. The fovea is located at the left tip of the model. As represented in the drawing, the object 80 is processed by a larger number of neurons than the objects in the periphery.
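The cortical magnification factor represented in Fig. 3 is commonly approximated in the vision-science literature by an inverse-linear function of retinal eccentricity. The following formula and constants are an illustrative gloss based on published human V1 estimates (the constants are typical fitted values from the literature, not values stated in this application):

```latex
% Linear cortical magnification as a function of eccentricity E (degrees):
% millimeters of V1 cortex devoted to one degree of visual field.
M(E) = \frac{M_0}{E + E_2},
\qquad M_0 \approx 17.3\ \text{mm},
\qquad E_2 \approx 0.75^{\circ}
```

Under these illustrative constants, roughly 23 mm of cortex represent each degree at fixation, falling below 2 mm by 10 degrees of eccentricity — the quantitative basis for the central/peripheral division of labor discussed here.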
Fig. 4 shows an example of a sample image search model 100 in accordance with one embodiment of the present invention. As noted above, in general this model differs from the traditional image search model 50 by providing a parallel framework that represents the division of work between central processes 110, which are involved in target identification, and peripheral processes 120, which are involved in the selection of a new fixation location (i.e., a new ROI). In general, in block 130, the subject begins a new fixation. In block 140, the subject selects an ROI, while in block 150 the subject determines whether the target has been identified. If, in block 160, the target is identified, the process stops. If, however, the target has not been identified, the process returns to block 130, where the subject begins a new fixation.
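The loop of Fig. 4 can be sketched in a few lines: a central process attempts identification at the current fixation while a peripheral stand-in proposes the next ROI. The function names, and the nearest-unvisited-location rule standing in for saliency-driven selection, are illustrative assumptions rather than the application's implementation:

```python
import math

def search(scene, target, fixation, max_fixations=50):
    """Alternate central identification and peripheral ROI selection (Fig. 4 sketch).

    scene    : dict mapping (x, y) locations to feature values
    target   : feature value being searched for
    fixation : starting (x, y) location
    """
    visited = set()
    for _ in range(max_fixations):
        visited.add(fixation)
        # Central process (blocks 150/160): identify the target at the current fixation.
        if scene.get(fixation) == target:
            return fixation                      # target found, process stops
        # Peripheral process (block 140): pick the nearest unvisited location
        # as the next ROI -- a toy stand-in for a saliency-driven choice.
        candidates = [p for p in scene if p not in visited]
        if not candidates:
            return None                          # scene exhausted, target absent
        fixation = min(candidates, key=lambda p: math.dist(p, fixation))
    return None

scene = {(0, 0): "T", (1, 0): "T", (2, 0): "L", (3, 0): "T"}
found = search(scene, "L", fixation=(0, 0))
print(found)  # (2, 0)
```

Because identification and ROI selection are separate steps, either half can later be replaced (e.g., by the neural network and fuzzy expert system components) without touching the other.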
As described above, the position of the eyes of a subject who is searching for a target object within an image can be tracked. As such, a perception-based model may be implemented, whereby the design principles of such a model may be derived from psychophysical observations of human performance during active visual search tasks via the use of, for example, a real-time infrared or other suitable eye-tracker. Psychophysical experiments were used to obtain probabilistic measures of both stimulus and neuroanatomical features that constrain the human visual system's real-time selection of image regions (ROIs) during the target discovery periods of active visual search. Further, mathematical precisiation tools were used to recast the psychophysical metrics as fuzzy predicates (i.e., computable statements) in order to develop a rule set for driving a model of human search performance (i.e., an expert system) that takes into account the intrinsic uncertainty of sensory processing.
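Recasting a psychophysical metric as a fuzzy predicate can be as simple as mapping a continuous measurement (here, retinal eccentricity of a candidate region) onto a graded truth value. The membership shape, breakpoints, and rule below are illustrative assumptions, not values or rules taken from the application:

```python
def near_fovea(eccentricity_deg, full=2.0, none=10.0):
    """Fuzzy predicate: degree (0..1) to which a location is 'near the fovea'.

    Fully true at or below `full` degrees of eccentricity, fully false at or
    beyond `none` degrees, linear in between (a standard trapezoid shoulder).
    """
    if eccentricity_deg <= full:
        return 1.0
    if eccentricity_deg >= none:
        return 0.0
    return (none - eccentricity_deg) / (none - full)

# A rule such as "IF region is near the fovea AND region is salient
# THEN inspect it" combines predicates with fuzzy AND (min):
def inspect_score(eccentricity_deg, saliency):
    return min(near_fovea(eccentricity_deg), saliency)

print(inspect_score(2.0, 0.9))  # 0.9
print(inspect_score(6.0, 0.9))  # 0.5
```

A rule set of such graded predicates, rather than hard thresholds, is what lets the expert system carry the "intrinsic uncertainty of sensory processing" through to its decisions.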
Fig. 5 shows a more detailed view of the image perception-based search model 100. Note that block 140 includes a neural network component, block 170, and a fuzzy expert system (FES) component, block 180. As shown in Fig. 6, the neural network component of the depicted embodiment calculates the saliency of the visual scene under scrutiny using a parallel computation composed of several feature map calculations at different spatial scales, and combines them into a single map that describes the significance of each location on the input scene. In the case of searching an image for one or more items of interest, this results in a set of ROIs. As shown in Fig. 7, the fuzzy expert system component, block 180, comprises a knowledge base and a logical inference engine that applies the facts to the rule sets and produces appropriate decisions along with the "train of thought" that was used to arrive at a particular choice. Types of rules may include, but need not be limited to, relation, recommendation, directive, strategy, heuristic, etc. The present fuzzy expert system thus translates psychometric functions into fuzzy sets that attempt to capture the essence of the measurement. The resulting perception-based model therefore uses fuzzy logic for the transformation of psychophysical observations into computable statements that can be used to intelligently guide the selection process in real-time.
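The multi-scale feature-map combination attributed to the neural network component can be sketched, without any claim to the application's actual architecture, as center-surround differencing at several scales summed into one master map. All function names and the choice of a box filter are illustrative:

```python
def box_blur(grid, radius):
    """Mean filter with the given radius (window clamped at the borders)."""
    h, w = len(grid), len(grid[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [grid[j][i]
                    for j in range(max(0, y - radius), min(h, y + radius + 1))
                    for i in range(max(0, x - radius), min(w, x + radius + 1))]
            out[y][x] = sum(vals) / len(vals)
    return out

def saliency(grid, scales=(1, 2)):
    """Sum of center-surround differences |I - blur_r(I)| over several scales."""
    h, w = len(grid), len(grid[0])
    master = [[0.0] * w for _ in range(h)]
    for r in scales:
        surround = box_blur(grid, r)
        for y in range(h):
            for x in range(w):
                master[y][x] += abs(grid[y][x] - surround[y][x])
    return master

# A lone bright item on a uniform background dominates the master map,
# so its location emerges as the top region of interest.
grid = [[0.0] * 5 for _ in range(5)]
grid[2][3] = 1.0
sal = saliency(grid)
best = max(((y, x) for y in range(5) for x in range(5)),
           key=lambda p: sal[p[0]][p[1]])
print(best)  # (2, 3)
```

In a full model the per-feature maps (orientation, color, luminance, motion) would each be computed this way and normalized before summation; thresholding the master map then yields the set of ROIs handed to the expert system.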
The resulting search model thus may be used to create a perception-based model that mimics the response of one or more trained humans. For example, the top left portion of Fig. 8 shows a search trial image 200 similar to that shown in Fig. 1, wherein a human observer searches the image 200 for the target object 210. A search path 530 generated, in part, by the perception-based model as it searches an image 500 for a target object 510 is shown in the bottom right portion of the figure. For comparison purposes, an ideal search path 330 (i.e., for a "trained" human) and a random search path 430 are shown for respective images 300 and 400. It should be noted that although the perception-based model described above relates to visual stimuli, similar models could be generated for other perceptions, including, but not limited to, auditory perceptions. Additionally, the perception-based models may be used in a variety of ways. For example, in some embodiments, they may serve as the basis for a behavioral system in order to provide a method of behavioral training for untrained specimens. In other embodiments, they may serve as the basis for a method of rehabilitation in order to rehabilitate debilitated specimens.
Fig. 9 schematically illustrates one exemplary embodiment of the present invention, directed to a method of behavioral training 600. In general, the method comprises training an untrained specimen to display the psychophysical reactions of a trained specimen. Such a method first comprises transforming psychophysical reactions of an exemplary specimen into computable statements using fuzzy logic (Block 610), the computable statements being associated with a perception process. Although the perception process of various embodiments may comprise a variety of perceptions, in some embodiments, the perception process may comprise the visual analysis of at least one image that may contain one or more items of interest. As noted above, such embodiments may be useful in the medical field, where images of biological tissue may contain cancerous or otherwise defective and/or abnormal cells. Other embodiments may include, but need not be limited to, embodiments wherein images are analyzed that may indicate the presence of one or more IEDs, embodiments wherein x-ray images are analyzed that may indicate contraband, and embodiments wherein a product or component is analyzed in a quality control environment for the indication of defects. Additionally, in various embodiments a specimen may be any human or machine specimen; in some embodiments, however, the trained specimen may be a trained human and the untrained specimen may be an untrained human. In some embodiments, the psychophysical reactions of the trained human may comprise eye scan patterns, wherein the perception process may comprise visual analysis of at least one image in search of at least one item of interest. The computable statements are then incorporated into an expert system (Block 620). The expert system of the perception process is then combined with a neural network corresponding to a perception model so as to form a behavioral system (Block 630).
In various embodiments, the perception model may comprise computing saliency across at least one image to generate one or more regions of interest, wherein the expert system selects at least one of the regions of interest for identification of at least one item of interest therein. Once a behavioral system is available, stimuli may be introduced to the behavioral system so as to elicit an exemplary response (Block 640). An untrained specimen, also introduced to the stimuli, may then be induced to mimic the exemplary response of the behavioral system so as to train the untrained specimen to display the psychophysical reactions of the exemplary specimen (Block 650). In some embodiments, an untrained human observer may be trained by preprocessing images that are known to have problematic regions of interest (such as, for example, images in which a trained human has located one or more items of interest) so that the untrained human observer is directed to analyzing these problem areas, thus displaying the psychophysical reactions of the trained human. It should be noted that in other embodiments, the untrained specimen may comprise an untrained artificial intelligence system whereby the artificial intelligence system is trained to mimic the response of an exemplary specimen. In some embodiments, the exemplary specimen may be a trained human, or, in other embodiments, the exemplary specimen may be a trained artificial intelligence system. In any event, inducing the untrained artificial intelligence system in some embodiments may comprise programming the untrained artificial intelligence to yield the psychophysical reactions of the trained specimen.
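One concrete way to induce an untrained observer toward the exemplary response is to score how closely their recorded fixation sequence tracks the model's, and feed that score back during training. The pairwise-distance metric below is an illustrative choice among many scanpath-similarity measures, not the application's stated method, and all names are hypothetical:

```python
import math

def scanpath_error(model_path, observer_path):
    """Mean Euclidean distance between corresponding fixations.

    Paths are compared over their common prefix; a lower score means the
    observer's eye scan pattern more closely mimics the exemplary response.
    """
    n = min(len(model_path), len(observer_path))
    if n == 0:
        return float("inf")
    return sum(math.dist(m, o)
               for m, o in zip(model_path[:n], observer_path[:n])) / n

exemplary = [(10, 10), (40, 12), (42, 55)]   # behavioral system's fixations
trainee   = [(12, 11), (38, 15), (45, 50)]   # untrained observer's fixations
print(round(scanpath_error(exemplary, trainee), 2))  # 3.89
```

A training session could then highlight the preprocessed problem regions whenever this error exceeds a threshold, drawing the untrained observer's gaze toward the fixations the exemplary specimen would have made.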
Fig. 10 schematically illustrates another exemplary embodiment of the present invention, directed to a method of rehabilitation 700. In general, the method comprises rehabilitating a debilitated specimen to mimic the exemplary response of a demonstration system. Such a method first comprises transforming psychophysical reactions of an unimpaired specimen into computable statements using fuzzy logic (Block 710), the computable statements being associated with a perception process. Although in various embodiments the debilitated specimen may be any human or machine specimen, in some embodiments the debilitated specimen may be an impaired human observer. Likewise, although in various embodiments the unimpaired specimen may be any human or machine specimen, in some embodiments the unimpaired specimen may be an unimpaired human observer. Moreover, although the perception process of various embodiments may comprise a variety of perceptions, in some embodiments, the perception process may comprise the visual analysis of at least one vehicle driving scenario.
The computable statements are then incorporated into an expert system (Block 720). The expert system of the perception process is then combined with a neural network corresponding to a perception model so as to form a demonstration system (Block 730). In various embodiments, the perception model may comprise computing saliency across at least one image to generate one or more regions of interest, wherein the expert system selects at least one of the regions of interest for identification of at least one item of interest. Once a demonstration system is available, a scenario may be introduced to the demonstration system so as to elicit an exemplary response (Block 740). A debilitated specimen, also introduced to the scenario, may then be induced to mimic the exemplary response of the demonstration system so as to rehabilitate the debilitated specimen (Block 750). As a result, in some embodiments where the debilitated specimen comprises a debilitated human observer, the unimpaired specimen comprises an unimpaired human observer, and the perception process comprises visual analysis of at least one driving scenario, the rehabilitation system may be used to rehabilitate human drivers that are or have become impaired.
The foregoing merely illustrates how exemplary embodiments of the present invention provide methods of training or rehabilitating specimens. Referring now to Fig. 11, a block diagram of an exemplary electronic device 800 (e.g., mainframe, PC, laptop, PDA, etc.) is shown that is configured to execute a method and computer program product for behavioral training or for rehabilitation. The electronic device may include various modules for performing one or more functions in accordance with exemplary embodiments of the present invention, including those more particularly shown and described herein, wherein such modules may comprise hardware, software, or a combination thereof. It should be understood, however, that the electronic device may include alternative configurations for performing one or more like functions, without departing from the spirit and scope of the present invention. As shown, the electronic device may generally include components, such as a processor, controller, or the like 802 connected to a memory 804, for performing or controlling the various functions of the disclosure. The memory can comprise volatile and/or non-volatile memory, and typically stores content, data or the like. For example, the memory typically stores content transmitted from, and/or received by, the electronic device. Also for example, the memory typically stores software applications, instructions or the like for the processor to perform steps associated with operation of the electronic device in accordance with embodiments of the present invention. In particular, the memory 804 may store computer program code for an application and other computer programs. 
For example, in one exemplary embodiment of the present invention, the memory may store computer program code for, among other things, transforming psychophysical reactions of an exemplary (or unimpaired) specimen into computable statements using fuzzy logic, the computable statements being associated with a perception process; incorporating the computable statements into an expert system; combining the expert system of the perception process with a neural network corresponding to a perception model, so as to form a behavioral (or demonstration) system; introducing a stimulus (or scenario) to the behavioral (or demonstration) system and eliciting an exemplary response therefrom; and inducing an untrained (or debilitated) specimen, also introduced to the stimulus (or scenario), to mimic the exemplary response so as to train the untrained specimen to display the psychophysical reactions of the exemplary specimen (or to rehabilitate the debilitated specimen).
In addition to the memory 804, the processor 802 can also be connected to at least one interface or other component(s) for displaying, transmitting and/or receiving data, content or the like. In this regard, the interface(s) can include at least one communication interface 806 or other component(s) for transmitting and/or receiving data, content or the like. For example, the communication interface may provide for communications with an input device. The communication interface may also include at least one user interface that can include a display 808 and/or a user input interface 810. The user input interface, in turn, can comprise any of a number of devices allowing the electronic device to receive data from a user, such as a keypad, a touch display, a joystick or other input device. As described above and as will be appreciated by one skilled in the art, embodiments of the present invention may be configured as a method. Accordingly, embodiments of the present invention may be comprised of various configurations, including entirely of hardware, entirely of software, or any combination of software and hardware. Furthermore, embodiments of the present invention may take the form of a computer program product consisting of a computer-readable storage medium (e.g., the memory 804 of Fig. 11) and computer-readable program instructions stored in the storage medium. Any suitable computer-readable storage medium may be utilized, including hard disks, CD-ROMs, optical storage devices, or magnetic storage devices.
Exemplary embodiments of the present invention have been described above with reference to block diagrams and flowchart illustrations of methods, apparatuses (i.e., systems) and computer program products. It will be understood that each block of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations, respectively, can be implemented in various manners, including computer program instructions. These computer program instructions may be loaded onto a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions which execute on the computer or other programmable data processing apparatus implement the functions specified in the flowchart block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including computer-readable instructions for implementing the function specified in the flowchart block or blocks. The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process such that the instructions that execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart block or blocks.
Accordingly, blocks of the block diagrams and flowchart illustrations support combinations of components for performing the specified functions, combinations of steps for performing the specified functions and program instructions for performing the specified functions. It will also be understood that each block of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations, can be implemented by special purpose hardware-based computer systems that perform the specified functions or steps, or combinations of special purpose hardware and computer instructions. Many modifications and other embodiments of the invention set forth herein will come to mind to one skilled in the art to which this invention pertains having the benefit of the teachings presented in the foregoing descriptions and the associated drawings. Therefore, it is to be understood that the invention is not to be limited to the specific embodiments disclosed and that modifications and other embodiments are intended to be included within the scope of the appended claims. Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation.

Claims

THAT WHICH IS CLAIMED: 1. A method of behavioral training, comprising: transforming psychophysical reactions of an exemplary specimen into computable statements using fuzzy logic, the computable statements being associated with a perception process; incorporating the computable statements into an expert system; combining the expert system of the perception process with a neural network corresponding to a perception model, so as to form a behavioral system; introducing a stimuli to the behavioral system and eliciting an exemplary response therefrom; and inducing an untrained specimen, also introduced to the stimuli, to mimic the exemplary response so as to train the untrained specimen to display the psychophysical reactions of the exemplary specimen.
2. A method according to Claim 1 , wherein transforming psychophysical reactions further comprises transforming psychophysical reactions of at least one trained human into computable statements using fuzzy logic, the computable statements being associated with a perception process, and wherein inducing an untrained specimen further comprises inducing an untrained human, also introduced to the stimuli, to mimic the exemplary response so as to train the untrained human to display the psychophysical reactions of the trained human.
3. A method according to Claim 2, wherein the psychophysical reactions of the at least one trained human comprise eye scan patterns, and wherein combining the expert system with the neural network to form a behavioral system further comprises combining the expert system with the neural network to form a behavioral system for visually analyzing at least one image via eye scan patterns of the at least one trained human.
4. A method according to Claim 3, wherein visually analyzing at least one image further comprises searching the at least one image for at least one item of interest.
5. A method according to Claim 4, further comprising computing, via the perception model, saliency across the at least one image and generating a plurality of regions of interest, and selecting, via the expert system, at least one of the regions of interest for identifying the at least one item of interest therein.
6. A method according to Claim 5, wherein inducing an untrained human observer further comprises preprocessing the at least one image so as to identify one or more problematic regions of interest that may contain the at least one item of interest, so as to train the untrained human to mimic eye scan patterns of the trained human.
7. A method according to Claim 3, wherein visually analyzing at least one image further comprises visually analyzing at least one image selected from the group consisting of: an image of biological tissue that may indicate defective or cancerous cells; an image that may indicate an improvised explosive device; an x-ray image that may indicate contraband; an image of a product or component that may indicate a defect; and combinations thereof.
8. A method according to Claim 1 , wherein transforming psychophysical reactions further comprises transforming psychophysical reactions of at least one trained human into computable statements using fuzzy logic, the computable statements being associated with a perception process, and wherein inducing an untrained specimen further comprises configuring at least one artificial intelligence system such that, when introduced to the stimuli, the artificial intelligence system mimics the exemplary response so as to display the psychophysical reactions of the at least one trained human.
9. A method according to Claim 8, wherein the psychophysical reactions of the at least one trained human comprise eye scan patterns, and wherein combining the expert system with the neural network to form a behavioral system further comprises combining the expert system with the neural network to form a behavioral system for visually analyzing at least one image via eye scan patterns of the at least one trained human.
10. A method according to Claim 9, wherein visually analyzing at least one image further comprises searching the at least one image for at least one item of interest.
11. A method according to Claim 10, further comprising computing, via the perception model, saliency across the at least one image and generating a plurality of regions of interest, and selecting, via the expert system, at least one of the regions of interest for identifying the at least one item of interest therein.
12. A method according to Claim 9, wherein visually analyzing at least one image further comprises visually analyzing at least one image selected from the group consisting of: an image of biological tissue that may indicate defective or cancerous cells; an image that may indicate an improvised explosive device; an x-ray image that may indicate contraband; an image of a product or component that may indicate a defect; and combinations thereof.
13. A method of rehabilitation, comprising: transforming psychophysical reactions of an unimpaired specimen into computable statements using fuzzy logic, the computable statements being associated with a perception process; incorporating the computable statements into an expert system; combining the expert system of the perception process with a neural network corresponding to a perception model, so as to form a demonstration system; introducing a scenario to the demonstration system and eliciting an exemplary response therefrom; and training a debilitated specimen, also introduced to the scenario, to mimic the exemplary response to thereby rehabilitate the debilitated specimen.
14. A method according to Claim 13, wherein transforming psychophysical reactions further comprises transforming psychophysical reactions of at least one unimpaired human into computable statements using fuzzy logic, the computable statements being associated with a perception process, and wherein training a debilitated specimen further comprises training a debilitated human, also introduced to the scenario, to mimic the exemplary response to thereby rehabilitate the debilitated human.
15. A method according to Claim 14, wherein the psychophysical reactions of the at least one trained human comprise eye scan patterns, and combining the expert system with the neural network to form a demonstration system further comprises combining the expert system with the neural network to form a demonstration system for visually analyzing at least one vehicle driving scenario via eye scan patterns of the at least one trained human.
16. A method according to Claim 15, wherein visually analyzing the at least one vehicle driving scenario further comprises searching the at least one vehicle driving scenario for at least one item of interest.
17. A method according to Claim 16, further comprising computing, via the perception model, saliency across the at least one vehicle driving scenario and generating a plurality of regions of interest, and selecting, via the expert system, at least one of the regions of interest for identifying the at least one item of interest therein.
18. A method according to Claim 17, wherein training a debilitated human further comprises preprocessing the at least one vehicle driving scenario so as to identify at least one problematic region of interest that may contain the at least one item of interest, so as to train the debilitated human.
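Claims 11 and 17 describe a perception model that computes saliency across an image (or driving scenario) to generate candidate regions of interest, from which an expert system selects regions likely to contain an item of interest. The sketch below is a minimal, illustrative reading of that pipeline, not code from the patent: the saliency measure (center-surround contrast), the greedy region picker, and the toy fuzzy membership function, along with all names (`saliency_map`, `top_regions`, `fuzzy_interest`), are assumptions introduced here for illustration.

```python
import numpy as np

def saliency_map(image):
    """Crude saliency proxy: center-surround contrast |pixel - local mean|."""
    # Local mean via a 5x5 box blur, with edge padding.
    pad = np.pad(image, 2, mode="edge")
    local_mean = np.zeros_like(image, dtype=float)
    for dy in range(5):
        for dx in range(5):
            local_mean += pad[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    local_mean /= 25.0
    return np.abs(image - local_mean)

def top_regions(sal, k=3, size=5):
    """Greedily pick the k most salient, mutually suppressed peak locations."""
    sal = sal.copy()
    regions = []
    for _ in range(k):
        y, x = np.unravel_index(np.argmax(sal), sal.shape)
        regions.append((y, x, float(sal[y, x])))
        # Suppress the chosen neighbourhood so the next peak lies elsewhere.
        y0, x0 = max(0, y - size), max(0, x - size)
        sal[y0:y + size, x0:x + size] = -np.inf
    return regions

def fuzzy_interest(salience, max_salience):
    """Toy fuzzy membership: degree in [0, 1] that a region is 'of interest'."""
    return min(1.0, salience / max_salience) if max_salience > 0 else 0.0

# Synthetic 32x32 image: flat background with one bright "item of interest".
img = np.zeros((32, 32))
img[10:13, 20:23] = 1.0

sal = saliency_map(img)
regions = top_regions(sal, k=3)
# The "expert system" step here is reduced to ranking by fuzzy membership.
best = max(regions, key=lambda r: fuzzy_interest(r[2], sal.max()))
```

In the claimed system, the final selection would be driven by fuzzy-logic rules derived from the psychophysical reactions (e.g., eye scan patterns) of trained humans rather than by raw salience alone; the membership function above stands in for that expert knowledge.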
PCT/US2009/047790 2008-06-20 2009-06-18 Training and rehabilitation system, and associated method and computer program product WO2009155415A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US7437908P 2008-06-20 2008-06-20
US61/074,379 2008-06-20

Publications (2)

Publication Number Publication Date
WO2009155415A2 true WO2009155415A2 (en) 2009-12-23
WO2009155415A3 WO2009155415A3 (en) 2010-10-07

Family

ID=41434690

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2009/047790 WO2009155415A2 (en) 2008-06-20 2009-06-18 Training and rehabilitation system, and associated method and computer program product

Country Status (1)

Country Link
WO (1) WO2009155415A2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112040834A (en) * 2018-02-22 2020-12-04 Innodem Neurosciences Eyeball tracking method and system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5724488A (en) * 1994-04-29 1998-03-03 International Business Machines Corporation Fuzzy logic entity behavior profiler
WO2000015104A1 (en) * 1998-09-15 2000-03-23 Scientific Learning Corporation Remediation of depression through computer-implemented interactive behavioral training
WO2006103241A2 (en) * 2005-03-31 2006-10-05 France Telecom System and method for locating points of interest in an object image using a neural network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"Foundations of Augmented Cognition", Lecture Notes in Computer Science, Springer Berlin Heidelberg, 22 July 2007 (2007-07-22), XP019062988, ISBN 9783540732150; the whole document *
Gary R. George, Frank Cardullo: "Application of Neuro-Fuzzy Systems to Behavioral Representation in Computer Generated Forces", Conference on Interservice/Industry Training Systems and Education, [Online] November 1999 (1999-11), pages 1-11, XP002569819. Retrieved from the Internet: URL:http://www.link.com/pdfs/neuro-fuzzy.pdf [retrieved on 2010-02-22] *

Also Published As

Publication number Publication date
WO2009155415A3 (en) 2010-10-07

Similar Documents

Publication Publication Date Title
Arvaneh et al. A P300-based brain-computer interface for improving attention
Abernethy Searching for the minimal essential information for skilled perception and action
Globus Consciousness and brain: I. The identity thesis
Yang et al. Distinct processing for pictures of animals and objects: evidence from eye movements.
England Sensory-motor systems in virtual manipulation
US20220013228A1 (en) Split vision visual test
Guo et al. Eye-tracking for performance evaluation and workload estimation in space telerobotic training
Marshall et al. Combining action observation and motor imagery improves eye–hand coordination during novel visuomotor task performance
Skiba et al. Attentional capture for tool images is driven by the head end of the tool, not the handle
Hopkins et al. Eye movements are captured by a perceptually simple conditioned stimulus in the absence of explicit contingency knowledge.
McCormick et al. Eye gaze metrics reflect a shared motor representation for action observation and movement imagery
Humphreys et al. Neuropsychological evidence for visual-and motor-based affordance: Effects of reference frame and object–hand congruence.
Perry et al. Multiple processes independently predict motor learning
Piña-Ramirez et al. Scenario screen: A dynamic and context dependent P300 stimulator screen aimed at wheelchair navigation control
Humphreys et al. From vision to action and action to vision: A convergent route approach to vision, action, and attention
WO2009155415A2 (en) Training and rehabilitation system, and associated method and computer program product
Hartkop et al. Foraging for handholds: attentional scanning varies by expertise in rock climbing
Chernykh et al. The development of an intelligent simulator system for psychophysiological diagnostics of trainees on the basis of virtual reality
Cos et al. Behavioural and Neural Correlates of Social Pressure during Decision-Making of Precision Reaches
Gajewski et al. The role of saccade targeting in the transsaccadic integration of object types and tokens.
Chattoraj et al. A confirmation bias due to approximate active inference
Manaligod Attentional biases by induced microvalence in novel objects: An emphasis on the role of experience
Hsiao Eye movements in face recognition
Burke et al. Eye and hand movements during reconstruction of spatial memory
Harris et al. The development of visual search behaviours in immersive virtual reality

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 09767724

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 09767724

Country of ref document: EP

Kind code of ref document: A2