WO2003067555A1 - A system and method to optimize human associative memory and to enhance human memory - Google Patents

A system and method to optimize human associative memory and to enhance human memory

Info

Publication number
WO2003067555A1
WO2003067555A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
memory
associative
pair
review
Prior art date
Application number
PCT/US2003/003832
Other languages
French (fr)
Inventor
Wei Yang
Original Assignee
Mintel Learning Technologies, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mintel Learning Technologies, Inc. filed Critical Mintel Learning Technologies, Inc.
Priority to AU2003210929A priority Critical patent/AU2003210929A1/en
Publication of WO2003067555A1 publication Critical patent/WO2003067555A1/en

Classifications

    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 7/00: Electrically-operated teaching apparatus or devices working with questions and answers

Definitions

  • This invention relates generally to a computer-based system to facilitate human associative memory and to exercise the memory function of the human brain.
  • The system contains a memory engine to optimize the human-computer interaction to generate the best long-term memory results and to optimally exercise the memory function of the human brain.
  • Associative memory, the memory that A goes with or equals B, is a fundamental component of human intelligence.
  • A and B could be concepts, words, or symbols in visual or acoustic format.
  • The process of forming associative memory is associative learning, which is illustrated by the following examples:
  • Mastering an anatomy term: forming an association between a part of an anatomical structure and its corresponding name;
  • Training an appropriate behavior: forming an association between an occasion or environment setting and an appropriate behavior.
  • The formation of a long-term association in the human memory system is a dynamic process and usually requires repeated learning trials of the same pair of stimuli (A, B). Because the process of forming long-term memory involves several stages, each with a specific time course, the final long-term memory result is highly dependent on the temporal pattern, or time series, of the repetitions. This is like pushing a pendulum, which has an intrinsic cycle time determined by its physical properties. To make the pendulum swing over a wide range, push it briefly at the right moments, at intervals equal to its intrinsic cycle time. Otherwise, no matter how frequently you push it, the pendulum merely vibrates over a small range.
  • The golden sequence is defined as the temporal pattern of the simple repetitions of learning a specific stimulus pair (A, B) that generates the best long-term memory results for a specific user.
  • The learning or exercise task can be expressed as forming associations between stimulus pairs like (A1, B1), (A2, B2), ..., (Ai, Bi), ..., (AN, BN).
  • The conventional practice is that the human being controls the learning or exercising process.
  • Although there are many computer software packages and utilities to facilitate retrieving and presenting learning materials, there is no existing utility that intelligently optimizes the learning sequence according to the temporal dynamics of the user's memory formation.
  • The lack of such a utility is likely due to the difficulty of detecting the golden sequence for the human learning process. This is more challenging than it appears, because the golden sequence varies across individuals and across different stimulus pairs. For example, the time intervals of the golden sequence for a user with a better memory are longer than those for a user with a poorer memory; the time intervals of the golden sequence of a word with an abstract meaning (e.g., EFFECT) are shorter than those of a word with a concrete meaning (e.g., APPLE).
  • An object of an embodiment of the present invention is to optimize the human learning and memory exercising process to achieve superior learning speed and enhanced memory function. This is mainly achieved by the crucial component of the system: a memory engine that detects in real time the golden sequences of the stimulus pairs for each user and accordingly optimizes the overall learning process.
  • This intelligent system can take the form of web applications, wireless applications, PC applications, hand-held devices, machines or other tangible formats.
  • The memory engine drives the learning process according to the memory status of the learned materials in the user's brain so that the human-computer interaction is resonant.
  • The interaction achieves superior memory results that are far beyond those obtained through regular human learning presently available, such as learning without the aid of a computer or through conventional computer systems that lack the memory engine technology according to embodiments of the present invention.
  • Another advantage is that learning with the memory engine is not only much faster, but also easy and fun.
  • The computer tracks the user's memory status in great detail and delivers the materials for review at the right time.
  • Learning with the memory engine makes even the most tedious learning processes, such as building vocabulary, easy and fun. This changes the psychology of language learners and builds their interest and confidence.
  • When the human memory system is optimally stimulated, it becomes more robust. Just as muscles built in the gym under the instruction of a personal trainer get much stronger than with workouts you do by yourself, learning with the memory engine enhances memory efficiently. After weeks of regularly using an embodiment of the present system, users feel they can remember things like phone numbers and addresses much better.
  • The system can be implemented as an exercise machine for the memory function of the human brain.
  • Embodiments of the present invention can have the following applications: facilitating the development of the memory system of children;
  • The learning process is automatic.
  • The operations required of the user are very simple.
  • The learning process has been optimized for the specific user by the system, so the user does not have to worry about the arrangement of the learned items. The user can simply respond when instructed, similar to a computer game situation.
  • Very detailed information regarding the user's memory status of the materials learned, along with progress statistics, is displayed to the user.
  • The memory building process is transparent to the user, and thus the user has a strong feeling of achievement.
  • Another object of an embodiment of the present invention is that the user does not forget any material learned in the system if it is used regularly.
  • The memory engine will detect any learned material that a user is about to forget and present it to the user for review.
  • The learning is focused on difficult contents. Usually, difficult contents are forgotten more quickly.
  • The memory engine will arrange for the user to review the difficult contents more frequently.
  • The learning is multidimensional, which helps users transfer the learned skills to real life. With multimedia technology, the materials are presented both visually and acoustically so that the user may learn through different sensory modalities.
  • Another object of an embodiment of the present invention is that the learning aims to build associative memory into habit. Unlike conventional learning, which stops at recalling A given B and vice versa, learning with the memory engine will continue to build the long-term memory until the user's response becomes spontaneous and effortless.
  • Conventionally, human associative learning for forming long-term memory and brain exercising remains controlled by humans, and the process is far from efficient.
  • This invention achieves these goals by its memory engine component, which detects in real-time the golden sequences of the learning materials for each user and accordingly optimizes the learning process.
  • This intelligent system can take the form of web applications, wireless applications, PC applications, hand-held devices, machines, or other formats.
  • Fig. 1 is a schematic diagram of one embodiment of the overall structure of the present invention as a dynamic system to optimize human associative learning and memory exercise.
  • Fig. 2 shows a flowchart illustrating one embodiment of the workflow of the present invention.
  • Fig. 3 illustrates a screenshot of the user interface showing step 208 of the flowchart of Fig. 2 where R1 , R2, and R3 represent three responses according to an embodiment of the present invention.
  • Fig. 4 illustrates a screenshot of the user interface showing step 216 of the flowchart of Fig. 2 where R4 and R5 are two responses according to an embodiment of the present invention.
  • Fig. 5 illustrates a screenshot of the user interface showing step 214 of the flowchart of Fig. 2 according to an embodiment of the present invention.
  • Fig. 6 illustrates preliminary testing results of the embodiment illustrated in Fig. 1 to Fig. 5.
  • Fig. 7 illustrates the workflow of a spelling practice task according to an embodiment of the present invention.
  • Fig. 8 illustrates the workflow of a listening-comprehension task according to an embodiment of the present invention.
  • Fig. 9 illustrates the workflow and user interfaces for a multiple-choice task according to an embodiment of the present invention.
  • Fig. 10 illustrates the workflow and user interfaces for a fill-in task according to an embodiment of the present invention.
  • THE SYSTEM
  • Fig. 1 is a schematic diagram illustrating an embodiment of the overall invented system.
  • The system can be implemented as Internet web applications, PC applications, or applications on handheld devices.
  • The implemented system can serve multiple users (in the case of a web or PC application) or, in the case of personalized handheld devices, usually only one user. Because the system optimizes the experience of each user (102), each user's data are separately processed, stored, and analyzed. A user logs in to the system so the system can recognize the user, retrieve the user's detailed information, and optimize his/her learning process accordingly.
  • The interface component (104) of the system includes an output unit and an input unit.
  • The output unit can be a visual display (e.g., computer screen, LCD) or auditory display (e.g., speaker) of the contents.
  • The interface (104) also displays information related to the user's learning progress and memory status to inform the user about the learning process.
  • The input units capture the user's responses to the presented contents.
  • The input units may be a mouse, keyboard, keypad, joystick, microphone, or other similar devices.
  • The content system (108) stores the contents for learning or for the memory exercise. They are usually associative pairs. These lists can be built in or made as removable units such as CDs, floppy disks, or memory cards.
  • The contents may comprise a variety of subject matter, such as: alphabet learning, word or concept learning (including meaning, pronunciation, or foreign language representation), sentence grammar learning, anatomy terms learning, behavioral training (forming an association between an occasion or environment and an appropriate behavior), and other subject matter.
  • The computer processor (106) takes the input from the user interface (104) and delivers the contents back to the interface for presentation to the user.
  • The above-described components comprise a conventional learning system (100).
  • A crucial component, a memory engine (120), is added to track in real time the memory status of each user and accordingly optimize the learning sequence to ensure every item is reviewed at the right moment to achieve the best long-term memory.
  • The user history database (122) stores very detailed information about a user. In addition to the user's personal profile data as recorded in conventional learning systems, the user history database (122) also stores data about each user's learning history with each pair of stimuli.
  • The history database (122) also sends progress-related information about a user to the user interfaces (104).
  • The main output of this history database (122) goes to the memory simulator (124), which simulates the user's temporal dynamic memory process for each learned pair in order to determine the golden sequences.
  • The memory simulator (124) sends the results to the golden sequence database (126).
  • The optimizer (128) contains a set of algorithms to optimize the learning process.
  • The set of algorithms aims to generate the best long-term memory results based on the golden sequences stored in the golden sequence database (126).
  • The optimizer (128) will take into account the golden sequences of a user and the time schedule of the user so as to arrange the sequence for the best learning result. Therefore, an embodiment of this system does not require the user to study at a specific time to get a good result; instead, it optimizes the learning results given the user's schedule. In the preferred embodiment, the user studies with it every day.
  • The above description covers the overall system and method of the embodiment of the present invention.
  • The detailed operation mechanism of the memory engine (120) will be revealed in the following description of a typical workflow.
  • Fig. 2 illustrates a typical workflow of a user: login, build vocabulary, and logout.
  • Upon login (200), the user is identified by the system.
  • The user can then choose a program with specific contents to study (202).
  • The user can customize the contents, such as adding or deleting some contents, or reordering the contents by alphabetical or semantic criteria.
  • The memory engine (120) then starts to run and determines which word is presented in the next trial in order to achieve the best long-term memory results (204). For example, when the memory engine (120) selects the word INQUISITIVE for the next trial, it is presented. The user is required to recall its meaning (208). There are three possible results of the recall:
  • R1 (209): recalled its meaning and sure about the answer;
  • R2 (211): recalled but not sure;
  • R3 (213): no idea what this word means.
  • The interface of the present embodiment, in its simplest form, is illustrated in Fig. 3.
  • The interface of this embodiment, in its simplest form, is illustrated in Fig. 4. The user indicates whether his answer is correct by clicking one of the two buttons R4 (217 or 410) or R5 (219 or 420).
  • If the user responds R3 (213), the meaning is also presented (214).
  • The interface in this embodiment, in its simplest form, is presented in Fig. 5.
  • The user then clicks on the continue button (>) R6 (215 or 510) to move to the next word.
  • After each trial, the memory engine (120) saves the data of the current trial to the user history database (122). Based on the history data, the memory engine (120) updates the memory status of the word just presented and determines the time interval after which this word should be reviewed (222). This time interval is stored in the golden sequence database (126). The system then updates the progress information (224) sent to the user interfaces (104). After each trial, or after a certain number of trials, the system evaluates the model (228) and adjusts it (226) if needed. Then the process moves to the next word unless the user chooses to exit the system (232, 236).
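The per-trial bookkeeping just described can be sketched in Python. This is an illustrative sketch only: the function name `record_trial`, the in-memory dictionaries standing in for the user history database (122) and golden sequence database (126), and the base interval `A0` are all assumptions, not details given by the patent.

```python
import math
import time

# Hypothetical in-memory stand-ins for the user history database (122)
# and the golden sequence database (126).
user_history = {}      # word -> list of (timestamp, response) trials
golden_sequence = {}   # word -> (next_review_time, memory status L)

A0 = 60.0  # assumed base interval in seconds; the patent leaves A unspecified

def record_trial(word, response, now=None):
    """Save the trial, update the word's memory status L, and schedule
    the next review after A0 * Exp(n - 1), following the default
    golden sequence rule described later in the patent."""
    now = now if now is not None else time.time()
    trials = user_history.setdefault(word, [])
    trials.append((now, response))
    n = len(trials)                      # trial serial number
    L = n                                # simplistic status: one unit per trial
    interval = A0 * math.exp(n - 1)      # default review interval
    golden_sequence[word] = (now + interval, L)
    return interval
```

A real implementation would persist both dictionaries and make `L` depend on the response (R1/R2/R3) rather than on the trial count alone.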
  • The system can also provide additional memory cues to supplement the associative learning.
  • The memory cues can be visually, acoustically, or semantically related words; the prefix, root, and suffix of a compound word; a picture for a visible concept or a sound for an acoustical concept; or a short story or even a short movie, to name a few.
  • An example sentence containing the word is presented to help the memory.
  • The above-described system can be simplified to fit into easy-to-carry handheld devices.
  • The memory engine can be a simple version that handles only one user.
  • The interfaces are the simplest versions of the above-described embodiments in Fig. 3 to Fig. 5.
  • The steps (202, 224, 226, 228) in Fig. 2 are optional and may be omitted for simplicity.
  • The main function of the memory engine (120) is to track in real time the memory status of each user, determine the golden sequence for each word learned, and accordingly optimize the learning sequence to ensure that each word is reviewed at time intervals close to its golden sequence.
  • The memory simulator (124) is one of the two core components of the memory engine (120) according to an embodiment of the present invention. It retrieves data from the user history database (122) to obtain the user's current memory status of the present word, and takes into account the current learning trial to update that memory status, which determines the next review interval. This interval is sent to the golden sequence database (126), which is used by the process optimizer (128) to arrange the learning sequence. Thus, the memory simulator (124) generates the intervals of the golden sequence for each word for each user.
  • The golden sequences are the scientific basis for the process optimizer (128) to arrange the learning sequence to achieve the best long-term memory results.
  • L(t) = 1 - Exp[-(t - t0)/τ]   (2), where L(t) is the strength of the long-term association at time t.
  • A review trial is a trial in which the user sees the pair A and B again.
  • One crucial difference between a repeated trial and an initial trial is that by the time of the repetition, the long-term association L(t) between A and B and the short-term activation trace l(t) are not 0, due to the residual effect of the initial learning.
  • At a review trial, the residual short-term activation trace is recharged from its current level to full strength 1.
  • Function (4) implies that the lifetime of the activation of an association increases with the long-term association strength. Note that this memory model is not restricted to the specific form of the functions.
  • For example, function (4) can be a power function. It requires systematic effort to identify which one works better.
  • L(t) is the strength of the long-term connection at time t;
  • τ_l is the lifetime of the long-term connection, and τ_l >> τ_s. Taking the long-term memory decay into consideration, for the best result in long-term memory it is desirable to repeat a pair before the decay of its long-term memory sets in.
  • The golden sequence is the time-interval sequence of the best times to review a word after its initial learning: the 1st repetition, the 2nd repetition, and so on.
  • The dynamic memory model provides a picture of the temporal pattern of long-term memory change. After a learning trial, the long-term memory is consolidated with time constant τ_s; after the majority of the consolidation is accomplished, the decay, with time constant τ_l, starts and gradually dominates.
  • The best review time intervals, or the golden sequence, is:
  • The above format is not the only way of expressing the golden sequence.
  • The parameter A can take different values, and the Exp function can be substituted by other forms of mathematical functions, such as a power function.
  • In this embodiment, format (8) is used as the golden sequence format.
  • The crucial components of the memory model that determine the golden sequence are the parameter A and the Exp function.
  • The user starts with a default golden sequence before the system detects the user's memory power and the difficulty of each word for the user.
  • The default golden sequence consists of the review intervals for a word of average difficulty learned by a user with average memory power.
  • The system can simply assign the next review interval as e^(n-1), where n is the trial serial number.
  • When n = 1, the pair is presented for the first time; when n > 1, the trials are repetitions.
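The default rule above generates a geometric ladder of review intervals. A minimal sketch (function name and trial count are illustrative; the patent leaves the time units unspecified):

```python
import math

def default_golden_sequence(trials):
    """Default review intervals e^(n-1) for trial numbers n = 1..trials,
    following the patent's default rule; units are left abstract here."""
    return [math.exp(n - 1) for n in range(1, trials + 1)]

intervals = default_golden_sequence(5)
# each interval is e ≈ 2.718 times the previous one, so reviews
# become exponentially less frequent as a pair is consolidated
```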
  • The golden sequence needs to be adjusted to obtain the best memory results.
  • The first event is testing, in which a user's memory of a word is tested. This provides the system with information about the user's current memory status for the presented word, based on which the system updates the user's memory status after the learning trial. Meanwhile, the system compares the user's actual performance with the prediction of the model so the model can be adjusted if the performance is too high or too low.
  • The second event is learning, in which the correct answer is presented for the user to learn. During the learning, the short-term activation is triggered, and the activation further consolidates and contributes to long-term memory.
  • If the user responds R4, we consider the word's long-term memory to be at a level equivalent to that of a new word that has experienced 8 trials (one learning trial plus 7 review trials) in the system.
  • These initial values are assigned by rough estimation. The system will work equally well if these initial values are slightly different; therefore, it does not matter exactly what the initial value is. After the initial learning trial, the L value is increased by 1 due to the learning, and its next review time is set to be long enough to permit a complete consolidation of the short-term activity.
  • In this embodiment, the next review interval for the nth trial is A0*Exp(n-1). Several aspects of this rule may need adjustment:
  • 1. The parameter A0 of A0*Exp(n) may not be appropriate;
  • 2. The function Exp in A0*Exp(n) may not be appropriate;
  • 3. The initial value A0*Exp(L0+1) for the next review time may not be the best guess.
  • The system does not require the user to review each word at exactly the next review time determined by the golden sequence. Instead, it permits the user to follow the user's own schedule, and the system arranges the learning sequence accordingly to gain the best result.
  • In practice, words are often overdue when reviewed; that is, the words are actually reviewed very late compared with the best time to review them. Consequently, a word's long-term memory at the review time may not be Ln-1 as predicted by the model.
  • The best estimation is set based on testing experience, and by allowing the mechanism to constantly evaluate the model and accordingly calibrate the L value when the model does not predict the user's performance well.
  • The L value is calibrated by the following rules:
  • The parameter A can also be adjusted to optimize the memory engine for a user, according to an embodiment of the present invention. If the A value is not appropriate, the L values for many words will need frequent adjustment. Accordingly, the A value determines the overall performance.
  • One method of adjusting the system is to ensure that the error rate stays within a specific range, e.g., 5% to 10%. A higher error rate indicates that the model overestimated the memory power of a user, and the user frequently forgot items just learned. This leads to relearning of the previously learned items, causing frustration in the user. A lower error rate indicates that the model underestimated the user, and it is likely that many items are redundantly reviewed, causing frequent disruption of the ongoing long-term memory consolidation process.
  • The system can evaluate the error rate and adjust the A value after a certain number of trials, e.g., 100 trials, by the following rule:
  • The function Exp can also be changed to a power function to achieve more flexibility.
  • The parameter A can be adjusted for each word. That is, the error rate for reviewing a specific word is calculated, and the A value for that word is adjusted accordingly.
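The error-rate-driven adjustment of A can be sketched as follows. The patent states the 5%-10% target band and the direction of each correction but does not show the exact update rule in this excerpt, so the multiplicative step used here is an illustrative guess.

```python
def adjust_A(A, errors, trials, low=0.05, high=0.10, step=1.25):
    """Nudge the interval parameter A to keep the review error rate
    within [low, high]. A high error rate means reviews arrive too
    late (intervals too long), so A is decreased; a low error rate
    means reviews arrive too early, so A is increased.
    The step factor 1.25 is an assumed placeholder."""
    rate = errors / trials
    if rate > high:
        return A / step   # schedule reviews sooner
    if rate < low:
        return A * step   # schedule reviews later
    return A
```

The same function could be applied per word, as the text suggests, by tracking an error count for each word separately.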
  • The optimizer (128) component of the memory engine optimizes the exact learning process based on the user's schedule and the memory status of each word for a particular user.
  • The optimizer (128) takes data from the golden sequence database (126), so it knows the best times to review each word. Based on such information, it determines which word to present in the next trial to achieve the best overall long-term memory results.
  • The optimizer (128) consists of a set of algorithms to arrange the trial sequence.
  • The optimizer (128) works in two modes: Learning mode and Review Only mode.
  • The main difference between these two modes is that in Review Only mode, new words are blocked from being presented, and the user only reviews words that were learned before. This is preferred when the user expects a break of more than a few days and wants to focus on the words already learned to prevent massive forgetting during the break.
  • Fig. 6 shows some testing data obtained at Northwestern Polytechnic University with 40 Chinese students who were studying in the ESL program. The words used were the most frequently tested TOEFL words and were mostly novel to the students. Most students spent about half an hour per day. There are a few important aspects of the results:
  • The learning speed ranged from 30 words per hour to 100 words per hour. The average speed was 50 words per hour.
  • The progress was mainly linear. That is, if a student acquired sixty words per hour, the student was likely to keep up this speed for hundreds of hours.
  • The recall task can be substituted by other associative learning tasks in either Learning or Review Only mode to make a rich learning experience for the users and provide broad training on various aspects of associative learning.
  • The learning contents are not limited to vocabulary learning; the contents, for instance, can be any of the associative contents described in the Background of the Invention section above.
  • Embodiments of the present invention may be presented with a combination of any of the associative learning tasks and any of the associative contents.
  • An associative learning trial takes many different forms, wherein an associative pair of items is presented to the user to learn and recall. Some of the possible forms of presenting these associative pairs are listed and illustrated below according to embodiments of the present invention. Variants of the recall task include, but are not limited to:
  • Reverse recall task: present B and ask the user to recall A. The user interface and workflow are the same as for the recall task illustrated in Fig. 2 to Fig. 6 as embodiments of the present invention.
  • Fig. 7 illustrates a simple workflow for this task according to an embodiment of the present invention.
  • Fig. 8 illustrates a simple workflow for this task according to an embodiment of the present invention.
  • FIG. 9 illustrates a simple user interface to implement this task according to an embodiment of the present invention.
  • Fill-in task: presenting context B for target A and leaving a space for the user to fill in A.
  • Fig. 10 illustrates a workflow and simple user interface to implement this task according to an embodiment of the present invention.
  • Game: the associative pairs [An, Bn] can be embedded into game-like scenarios to make the associative learning more fun, according to an embodiment of the present invention.
  • A general way of extending the system is to separate the learning trials from the reviewing trials, and to change the learning trials from a recall task to a more natural learning task: reading.
  • The system stores these contents and the corresponding associated contents as associative pairs for use in building associative memory.
  • The reading task has a momentum that makes it undesirable to interrupt.
  • When the overdue associative memory of a user is above a certain criterion, the system will recommend building the associative memory; otherwise, the system will recommend that the user continue reading to learn new materials.
  • The present system can thus be integrated with reading activities to systematically handle vocabulary learning in a normal reading process.
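The reading/review recommendation above can be sketched in a few lines. The overdue criterion value is an assumed placeholder; the patent only says "a certain criterion".

```python
def recommend_activity(review_times, now, max_overdue=20):
    """Recommend 'review' when the number of overdue associative pairs
    exceeds a criterion, else 'read' (continue learning new material
    through reading). The threshold of 20 pairs is an illustrative
    assumption. `review_times` maps each pair to its due time."""
    overdue = sum(1 for t in review_times.values() if t <= now)
    return "review" if overdue > max_overdue else "read"
```

This keeps the reading task's momentum intact: the system interrupts for review sessions only once enough pairs have come due.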

Abstract

A system and method to optimize the human learning process to achieve superior long-term memory results and to exercise the brain to enhance human memory. The system (100) comprises a display device (104) to present stimuli in visual/acoustic form and display progress information, an input device to receive a user's responses, and a database (108) containing the contents for learning or the stimuli for memory exercising. In addition to these conventional components, an embodiment of the present invention presents a memory simulator (124) to track the temporal dynamics of human associative memory, and an optimizer (128) to optimize the learning sequence to achieve memory results far beyond those achievable by the human brain on its own. The system and method can be implemented as a web application, as a program on a PC, or through wireless or handheld devices such as a cell phone, PDA, watch, or toy, among others. An important feature of an embodiment of the present invention is that the learning/exercising process is optimized by the system, which thus automates the process for the user. Due to the optimization according to the temporal dynamics of the human memory process, the process results in superior long-term memory formation and thus a more robust human memory.

Description

A SYSTEM AND METHOD TO OPTIMIZE HUMAN ASSOCIATIVE MEMORY AND TO ENHANCE HUMAN MEMORY
Inventor: Wei Yang
CROSS REFERENCE TO RELATED APPLICATIONS
This application claims the benefit of provisional patent application 60/355,544, filed on February 6, 2002.
FIELD OF INVENTION
This invention relates generally to a computer-based system to facilitate human associative memory and to exercise the memory function of the human brain. The system contains a memory engine to optimize the human-computer interaction to generate the best long-term memory results and to optimally exercise the memory function of the human brain.
BACKGROUND OF THE INVENTION
Associative memory, the memory that A goes with or equals B, is a fundamental component of human intelligence. A and B could be concepts, words, or symbols in visual or acoustic format. The process of forming associative memory is associative learning, which is illustrated by the following examples:
■ Learning an alphabet: forming an association between a written letter and its corresponding pronunciation;
■ Learning a new word or concept: forming an association between a written word or concept and its meaning, pronunciation, or its counterpart represented in a different language;
■ Learning a new sentence in a foreign language: forming associations between the sentence and its meaning;
■ Mastering an anatomy term: forming an association between a part of an anatomical structure and its corresponding name;
■ Training an appropriate behavior: forming an association between an occasion or environment setting and an appropriate behavior.
The formation of a long-term association in the human memory system is a dynamic process and usually requires repeated learning trials of the same pair of stimuli (A, B). Because the process of forming long-term memory involves several stages, each with a specific time course, the final long-term memory result is highly dependent on the temporal pattern, or time series, of the repetitions. This is like pushing a pendulum, which has an intrinsic cycle time determined by its physical properties. To make the pendulum swing over a wide range, push it briefly at the right moments, at intervals equal to its intrinsic cycle time. Otherwise, no matter how frequently you push it, the pendulum merely vibrates over a small range.
In the case of associative learning, to form a long-term memory that A goes with B (e.g., A is the English word APPLE and B is PingGuo, the Chinese word for apple), you need to repeat the trial after a few seconds, before you forget it, then after a few minutes, a few hours, a few days, and so on. From the neurophysiological point of view, there is a best temporal pattern, or sequence, of the repetitions to form a permanent memory of PingGuo and APPLE most efficiently. Thus, the term golden sequence is defined as the temporal pattern of the simple repetitions of learning a specific stimulus pair (A, B) that generates the best long-term memory results for a specific user.
In normal learning scenarios, whether regular learning or clinical training to improve memory, the learning or exercise task can be expressed as forming associations between stimulus pairs (A1, B1), (A2, B2), ..., (Ai, Bi), ..., (AN, BN). To achieve the best learning result, it is desirable to arrange the overall learning sequence so that the actual temporal pattern for the repetitions of each stimulus pair is, or is close to, its golden sequence.
The conventional practice is that the human being controls the learning or exercising process. Although there is much computer software and many utilities to facilitate the retrieval and presentation of learning materials, there is no existing utility to intelligently optimize the learning sequence according to the temporal dynamics of memory formation of the user. The lack of such a utility is likely due to the difficulty of detecting the golden sequence for the human learning process. This is more challenging than it appears, because the golden sequence varies across individuals and across different stimulus pairs. For example, the time intervals of the golden sequence for a user with a better memory are longer than those for a user with a poorer memory; the time intervals of the golden sequence of a word with an abstract meaning (e.g., EFFECT) are shorter than those of a word with a concrete meaning (e.g., APPLE).
In light of the above discussion, human associative learning and memory exercising, important aspects of human life, remain controlled by human beings in a casual or intuitive manner, and the process is far from efficient. There is a need to scientifically optimize the learning process to empower human memory.
SUMMARY OF THE INVENTION
An embodiment of the present invention optimizes the human learning and memory exercising process to achieve superior learning speed and enhanced memory function. This is mainly achieved by the crucial component of the system — a memory engine that detects in real time the golden sequences of the stimulus pairs for each user and accordingly optimizes the overall learning process. This intelligent system can take the form of web applications, wireless applications, PC applications, hand-held devices, machines or other tangible formats.
There are several advantages and objects of the embodiments of the present invention, which include, for instance, the memory engine of the system. The memory engine drives the learning process according to the memory status of the learned materials in the user's brain so that the human-computer interaction is resonant. The interaction achieves memory results far beyond those obtained through regular human learning presently available, such as learning without the aid of a computer or through conventional computer systems that lack the memory engine technology according to the embodiments of the present invention.
Another advantage is that learning with the memory engine is not just much faster, but also easy and fun. The computer tracks the user's memory status in great detail and delivers the materials for review at the right time. Learning with the memory engine makes even the most tedious learning processes, such as building vocabulary, easy and fun. This changes the psychology of language learners and builds their interest and confidence. In addition, when the human memory system is optimally stimulated, the system becomes more robust. Just as muscles trained in a gym under the instruction of a personal trainer grow much stronger than through exercise done alone, learning with the memory engine enhances memory efficiently. After weeks of regularly using the embodiment of the present system, users feel they can remember things like phone numbers and addresses much better.
Therefore, according to other embodiments of the present invention, the system can be implemented as an exercise machine for the memory function of the human brain. Embodiments of the present invention can have the following applications:
■ facilitating the development of the memory system of children;
■ enhancing the memory power of adults;
■ preventing memory deterioration caused by normal aging;
■ preventing or slowing down memory loss at the very early stage of various dementias;
■ rehabilitating brain memory function after various types of brain damage.
Furthermore, another advantage of embodiments of the present invention is that the learning process is automatic. The operations required of the user are very simple. The learning process has been optimized for the specific user by the system, so the user does not have to worry about the arrangement of the learned items. The user can simply respond when instructed, similar to a computer game.
Moreover, very detailed information regarding the user's memory status of the materials learned, along with progress statistics, is displayed to the user. The memory building process is transparent to the user, and thus the user has a strong feeling of achievement.
Another object of the embodiment of the present invention is that the user does not forget any material learned in the system if it is used regularly. The memory engine will detect any learned material that a user is about to forget and present it to the user for review. In addition, the learning is focused on difficult contents. Usually, difficult contents are forgotten more quickly. The memory engine will arrange for the user to review the difficult contents more frequently. Furthermore, another object is that the learning is multidimensional, which helps users transfer the learned skills to real life. With multimedia technology, the materials are presented both visually and acoustically so that the user may learn using different sensory modalities.
Another object of an embodiment of the present invention is that the learning aims to build associative memory into habit. Unlike conventional learning, which stops at recalling A given B and vice versa, learning with the memory engine continues to build the long-term memory until the user's response becomes spontaneous and effortless.
The human associative learning for forming long-term memory and brain exercising remains controlled by humans, and the process is far from efficient. There exists a need for a system to optimize the associative learning process to gain superior learning speed and enhance the memory function. This invention achieves these goals by its memory engine component, which detects in real time the golden sequences of the learning materials for each user and accordingly optimizes the learning process. This intelligent system can take the form of web applications, wireless applications, PC applications, hand-held devices, machines or other formats.
These and other embodiments of the present invention are further made apparent, in the remainder of the present document, to those of ordinary skill in the art.
BRIEF DESCRIPTION OF THE DRAWINGS
In order to more fully describe embodiments of the present invention, reference is made to the accompanying drawings. These drawings are not to be considered limitations in the scope of the invention, but are merely illustrative.
Fig. 1 is a schematic diagram of one embodiment of the overall structure of the present invention as a dynamic system to optimize human associative learning and memory exercise.
Fig. 2 shows a flowchart illustrating one embodiment of the workflow of the present invention.
Fig. 3 illustrates a screenshot of the user interface showing step 208 of the flowchart of Fig. 2, where R1, R2, and R3 represent three responses according to an embodiment of the present invention.
Fig. 4 illustrates a screenshot of the user interface showing step 216 of the flowchart of Fig. 2 where R4 and R5 are two responses according to an embodiment of the present invention.
Fig. 5 illustrates a screenshot of the user interface showing step 214 of the flowchart of Fig. 2 according to an embodiment of the present invention.
Fig. 6 illustrates preliminary testing results of the embodiment illustrated in Fig. 1 to Fig. 5.
Fig. 7 illustrates the workflow of a spelling practice task according to an embodiment of the present invention.
Fig. 8 illustrates the workflow of a listening-comprehension task according to an embodiment of the present invention.
Fig. 9 illustrates the workflow and user interfaces for a multiple-choice task according to an embodiment of the present invention.
Fig. 10 illustrates the workflow and user interfaces for a fill-in task according to an embodiment of the present invention.
DETAILED DESCRIPTION OF SPECIFIC EMBODIMENTS
The description above and below and the drawings of the present document focus on one or more currently preferred embodiments of the present invention and also describe some exemplary optional features and/or alternative embodiments. The description and drawings are for the purpose of illustration and not limitation. Those of ordinary skill in the art would recognize variations, modifications, and alternatives. Such variations, modifications, and alternatives are also within the scope of the present invention. Section titles are terse and are for convenience only.
THE SYSTEM
Fig. 1 is a schematic diagram illustrating an embodiment of the overall system. The system can be implemented as Internet web applications, PC applications, or applications on handheld devices. The implemented system can serve multiple users (in the case of a web or PC application), or, in the case of personalized handheld devices, usually only one user. Because the system optimizes the experience of each user (102), each user's data are separately processed, stored and analyzed. A user logs in to the system so the system can recognize the user, retrieve the user's detailed information, and optimize his/her learning process accordingly.
The interface component (104) of the system includes an output unit and an input unit. The output unit can be a visual display (e.g. computer screen, LCD) or an auditory display (e.g. speaker) of the contents. The interface (104) also displays the user's learning progress and memory-status-related information to inform the user about the learning process. The input units take in the user's responses to the presented contents. The input units may be a mouse, keyboard, keypad, joystick, microphone, or other similar devices.
The content system (108) stores the contents for learning or for the memory exercise. The contents are usually associative pairs. These lists can be built in or provided as removable units such as CDs, floppy disks, or memory cards. The contents may comprise a variety of subject matter, such as: alphabet learning, word or concept learning (including meaning, pronunciation or foreign-language representation), sentence grammar learning, anatomy terms, behavioral training (forming an association between an occasion or environment and an appropriate behavior), and other subject matter.
The computer processor (106) takes the input from the user interface (104) and delivers back to the interface the contents to present to the user. The above-described components comprise a conventional learning system (100). In accordance with an embodiment of the present invention, a crucial component — a memory engine (120) — is added to track in real time the memory status of each user and accordingly optimize the learning sequence to ensure that every item is reviewed at the right moment to achieve the best long-term memory. The user history database (122) stores very detailed information about a user. In addition to the user's personal profile data as recorded in conventional learning systems, the user history database (122) also stores data about the user's learning history with each pair of stimuli. The history database (122) also sends progress-related information about a user to the user interfaces (104). The main output of this history database (122) goes to the memory simulator (124), which simulates the user's temporal dynamic memory process for each learned pair in order to determine the golden sequences. The memory simulator (124) sends the results to the golden sequence database (126).
The optimizer (128) contains a set of algorithms to optimize the learning process. The algorithms aim to generate the best long-term memory results based on the golden sequences stored in the golden sequence database (126). The optimizer (128) takes into account the golden sequences of a user and the time schedule of the user so as to arrange the sequence for the best learning result. Therefore, an embodiment of this system does not require the user to study at a specific time to get a good result; instead, it optimizes the learning results given the user's schedule. In the preferred embodiment, the user studies with it every day. The above is a description of the overall system and method of the embodiment of the present invention. The detailed operation mechanism of the memory engine (120) is revealed in the following description of a typical workflow.
THE WORKFLOW
The following description first describes a usage session through an embodiment of this system to illustrate the details of the resonant human-computer interaction process achieved by this invention. The next part describes the core component of this invention — the memory engine (120) — and its functions.
The workflow is illustrated in Fig. 2 by a common associative learning task — the recall task (210) — applied to common associative learning contents — English vocabulary. Other forms of associative learning tasks are described herein. In accordance with an embodiment of the present invention, Fig. 2 illustrates a typical workflow of a user: login, build vocabulary, and logout. Upon login (200), the user is identified by the system. The user can then choose a program with specific contents to study (202). The user can customize the contents, such as adding or deleting contents, or reordering them by alphabetical or semantic criteria.
Once the user starts the learning process, the memory engine (120) starts to run and determines which word is presented in the next trial in order to achieve the best long-term memory results (204). For example, if the memory engine (120) selects the word INQUISITIVE for the next trial, the word is presented and the user is required to recall its meaning (208). There are three possible results of the recall:
R1 (209): Recalled its meaning and sure about the answer;
R2 (211): Recalled but not sure;
R3 (213): Have no idea what this word means.
The interface of the present embodiment in its simplest form is illustrated in Fig. 3. The user clicks on one of the three response buttons to indicate the recall result. If the user responds R1 (209 or 310) or R2 (211 or 320), the explanation of the word is presented (216). The interface of this embodiment in its simplest form is illustrated in Fig. 4. The user indicates whether his answer is correct by clicking one of the two buttons R4 (217 or 410) or R5 (219 or 420).
If the user responds R3 (213), the meaning is also presented (214). The interface in this embodiment in its simplest form is presented in Fig. 5. The user clicks on the continue button (>) R6 (215 or 510) to move to the next word.
After each trial, the memory engine (120) saves the data of the current trial to the user history database (122). Based on the history data, the memory engine (120) updates the memory status of the word just presented and determines the time interval after which this word should be reviewed (222). This time interval is stored in the golden sequence database (126). The system then updates the progress information (224) sent to the user interfaces (104). After each trial, or after a certain number of trials, the system evaluates the model (228) and adjusts the model (226) if needed. Then the process moves to the next word unless the user chooses to exit the system (232, 236).
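The per-trial bookkeeping described above can be sketched as follows. This is a hypothetical outline in Python, not the actual implementation; the function name and data structures are illustrative, and the review interval follows the A*Exp(L) rule developed in the memory model below.

```python
import math

def record_trial(history, word, response, L, now, A=1.0):
    """Log one trial for a word, schedule its next review, and bump its
    long-term strength L by 1 for the learning event in the trial.

    history  -- dict mapping word -> list of (time, response), standing in
                for the user history database (122)
    L        -- long-term association strength before this trial
    now      -- current time in seconds
    """
    history.setdefault(word, []).append((now, response))
    # Next review interval A * Exp(L), with L taken before this trial,
    # reproducing the default golden sequence [1, e, e**2, ...].
    next_review = now + A * math.exp(L)
    return L + 1, next_review
```

A first presentation at time 0 with L = 0 schedules the first review one second later; after that trial L = 1, so the following interval is e seconds, and so on.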
The system can also provide additional memory cues to supplement the associative learning. In the case of vocabulary learning, the memory cues can be visually, acoustically, or semantically related words; the prefix, root and suffix of a compound word; a picture for a visible concept; a sound for an acoustic concept; or a short story or even a short movie, to name a few. In the illustrated embodiment of the present invention, an example sentence containing the word is presented to aid the memory.
A SIMPLIFIED SYSTEM
The above-described system can be simplified to fit into easy-to-carry handheld devices. The memory engine can be a simple version that handles only one user. The interfaces are the simplest versions of the above-described embodiments in Fig. 3 to Fig. 5. In an embodiment, the steps (202, 224, 226, 228) in Fig. 2 are optional and may be dropped for simplicity.
THE MEMORY ENGINE
As described above, according to an embodiment of the present invention, the main function of the memory engine (120) is to track in real time the memory status of each user, determine the golden sequence for each word learned, and accordingly optimize the learning sequence to ensure that each word is reviewed at time intervals that are close to its golden sequence.
THE MEMORY SIMULATOR
The memory simulator (124) is one of the two core components of the memory engine (120) according to an embodiment of the present invention. It retrieves data from the user history database (122) to obtain the user's current memory status for the present word, and takes into account the current learning trial to update that memory status, which determines the next review interval. This interval is sent to the golden sequence database (126) and is used by the process optimizer (128) to arrange the learning sequence. Therefore, the memory simulator (124) generates the intervals of the golden sequence for each word for each user. The golden sequences are the scientific basis for the process optimizer (128) to arrange the learning sequence to achieve the best long-term memory results.
Memory model: memory mechanism of initial learning
Suppose a new word A is presented (208) to a user and the user does not know its meaning B. This means that there is no long-term memory association between A and B, expressed as L = 0. The user responds R3 (213) to indicate that the user does not know this word. Then, the system presents its meaning B (214). When the user sees both A and B, the co-activation of the representations of A and B in the user's brain initiates a temporary activation trace as a short-term association between them. The time course of the persistence of the association is expressed as:
l(t) = Exp(-t/τs)   (1)
• l(t) is the level of the short-term activity at time t;
• τs is the lifetime of the short-term association. It is the time it takes for the association to decay from the full level 1 to 1/e.
• Note that in function (1), the rising phase of the activation is not considered in the present model because its time course is much shorter than the decay lifetime.
The above short-term activation consolidates into a long-term connection L. The time course of the consolidation can be described as:
L(t) = 1 - exp[-(t - t0)/τc]   (2)
• L(t) is the strength of the long-term association at time t;
• τc is the time constant of the consolidation;
• t0 is the delay of the consolidation relative to the activation. For simplicity, the model assumes t0 = 0 and τc = τs. Thus,
L(t) = 1 - exp(-t/τs) = ∫l(t)dt   (3)
Thus, a complete consolidation results in a long-term memory strength of 1.
After the initial learning of a new word, the long-term association between A and B can finally reach to the level of L=1.
Memory model: memory mechanism of repetition
A review trial is a trial in which the user sees the pair A and B again. One crucial difference between a repeated trial and an initial trial is that by the time of the repetition, the long-term association L(t) between A and B and the short-term activation trace l(t) are not 0, due to the residual effect of the initial learning. When the representations of A and B are co-activated again, the residual short-term activation trace is recharged from its current level to the full strength 1. The decay time constant of this short-term activation is determined by the current strength of the long-term association L:
τs = A*Exp(L)   (4)
• A is a scaling factor. In the current embodiment, A = 1. This parameter can be adjusted.
Function (4) implies that the lifetime of the activation of an association increases with the long-term association strength. Note that this memory model is not restricted to the specific form of the functions. For example, function (4) can be a power function. It requires systematic effort to identify which one works better.
Again, the short-term activation continues to consolidate into the existing long-term connection. Thus:
L(t) = L + {1 - exp[-(t - t0)/τc]}   (5)
• L is the current strength of the long-term association;
• L(t) is the strength of the long-term association at time t;
• τc is the time constant of the consolidation;
• t0 is the delay of the consolidation relative to the activation. For simplicity, the model assumes t0 = 0 and τc = τs. Thus,
L(t) = L + {1 - exp(-t/τs)} = L + ∫l(t)dt   (6)
Memory model: decay of long-term memory
The long-term association is also subject to decay, but with a much longer time course:
L(t) = L(0)*Exp(-t/τl)   (7)
• L(t) is the strength of the long-term connection at time t;
• L(0) is the strength of the long-term connection at time t = 0;
• τl is the lifetime of the long-term connection; τl >> τs.
Taking the long-term memory decay into consideration, for the best result in long-term memory, it is desirable to repeat a pair before the decay of its long-term memory sets in.
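The time courses above (under the simplifying assumptions t0 = 0 and τc = τs) can be transcribed numerically as follows. This is an illustrative Python sketch, not the patented implementation; the function names are ours.

```python
import math

def short_term_activation(t, tau_s):
    """Function (1): l(t) = Exp(-t/tau_s), decay of the short-term trace."""
    return math.exp(-t / tau_s)

def consolidation(t, tau_s, L=0.0):
    """Functions (3) and (6): long-term strength reached by time t after a
    trial, added on top of the prior strength L (L = 0 for initial learning)."""
    return L + (1.0 - math.exp(-t / tau_s))

def activation_lifetime(L, A=1.0):
    """Function (4): tau_s = A * Exp(L); the trace lifetime grows with the
    current long-term association strength."""
    return A * math.exp(L)
```

For a new pair (L = 0, A = 1) the trace lifetime is 1 second; by that time the trace has decayed to 1/e and roughly 2/3 of the consolidation is complete, which is the review moment used to build the golden sequence below.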
Memory model: generation of the golden sequence
The description below explains how to determine the best time to review a word according to the above dynamic memory model, as an embodiment of the present invention. The golden sequence is the sequence of best time intervals at which to review a word after its initial learning, the 1st repetition, the 2nd repetition, and so on. The dynamic memory model provides a picture of the temporal pattern of long-term memory change. After a learning trial, the long-term memory is consolidated with time constant τs; after the majority of the consolidation is accomplished, the decay, with time constant τl, starts and gradually dominates.
If a repetition occurs before the completion of the consolidation of the previous activation trace, the previous trace is recharged to full strength before a full consolidation. Thus, it is desirable to repeat a trial after the consolidation of the previous activation trace is complete, and before the decay has become significant. This time is estimated to be the lifetime of the short-term activation trace, at which point about 2/3 of the activation has consolidated and the decay of the long-term memory does not yet dominate the process. Thus, the best review time is τs = A*Exp(L). For the first learning trial, L = 0 before the trial, and L increments by 1 after each learning trial because of a complete consolidation.
Accordingly, the best review time intervals, or the golden sequence is:
[A*Exp(0), A*Exp(1), A*Exp(2), ..., A*Exp(n), ...]   (8)
The above format is not the only way of expressing the golden sequence. For example, the parameter A can be of different value, and the Exp function can be substituted by other forms of mathematical functions such as a power function etc. In the present preferred embodiment, (8) is used as the golden sequence format.
Memory model: the default golden sequence
As illustrated in golden sequence (8), the crucial components of the memory model that determine the golden sequence are the parameter A and the Exp function. When a new user begins the learning process, the user starts with a default golden sequence, used before the system has detected the user's memory power and the difficulty of each word for the user. The default golden sequence consists of the review intervals for a word of average difficulty learned by a user with average memory power.
In this embodiment, A = 1 sec; thus the default golden sequence is:
[1, e, e^2, e^3, ..., e^(n-1), ...]   (9)
For a new word, the system can simply assign the next review interval as e^(n-1), where n is the trial serial number. When n = 1, the pair is presented for the first time; when n > 1, the trials are repetitions. There are several occasions on which the golden sequence needs to be adjusted to obtain the best memory results:
• The user has some memory of a word when it is presented for the first time;
• The word is more difficult or easier than average;
• The user's memory power is above or below average.
How the golden sequence accommodates each of these variances is described next.
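The default golden sequence (9) can be generated in a few lines. This is an illustrative sketch; `default_golden_sequence` is our name, not the patent's.

```python
import math

def default_golden_sequence(n_intervals, A=1.0):
    """Sequence (9): the review interval after the n-th trial is A * e**(n-1),
    so with A = 1 sec the intervals are [1, e, e**2, ...]."""
    return [A * math.exp(n) for n in range(n_intervals)]
```

With A = 1 the first few intervals are about 1 s, 2.7 s, 7.4 s, 20 s, ..., growing exponentially as the association strengthens.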
Processing of a word in an initial trial
In accordance with an embodiment of the present invention, not all the words in this system are completely new to the user. In many cases, the user has some existing memory of words encountered elsewhere. The system handles this situation by testing each word first and assessing its long-term memory status before the learning process begins in this system.
Note that in each trial, two events occur. The first event is testing, in which the user's memory of a word is tested. This provides the system with information about the user's current memory status for the presented word, based on which the system updates the user's memory status after the learning trial. Meanwhile, the system compares the user's actual performance with the prediction of the model, so the model can be adjusted if the performance is too high or too low. The second event is learning, in which the correct answer is presented for the user to learn. During the learning, the short-term activation is triggered, and the activation further consolidates and contributes to long-term memory.
As illustrated in Fig. 2, there are five possible response scenarios in a trial. In accordance with the present embodiment, the initial L value is assigned accordingly:
1: R1 (Know for sure) followed by R4 (Correct); L0 = 50;
2: R2 (Know but not sure) followed by R4 (Correct); L0 = 8;
3: R3 (No idea) followed by R6 (Continue); L0 = 0;
4: R2 (Know but not sure) followed by R5 (Wrong); L0 = 0;
5: R1 (Know for sure) followed by R5 (Wrong); L0 = 0.
For example, if the user chooses R4 after R2, its long-term memory is considered to be at a level equivalent to that of a new word that has experienced 8 trials (one learning trial plus 7 review trials) in the system. Typically these initial values are assigned by rough estimation. The system will work equally well if these initial values are slightly different; therefore, it does not matter exactly what the initial value is. After the initial learning trial, the L value is increased by 1 due to the learning, and its next review time is set to be long enough to permit a complete consolidation of the short-term activity.
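The assignment of the initial L0 value from the five response scenarios amounts to a simple lookup. The sketch below is illustrative; the response labels follow Figs. 3 to 5.

```python
# Initial long-term strength L0, keyed by (recall response, verification response).
INITIAL_L0 = {
    ("R1", "R4"): 50,  # know for sure, then correct
    ("R2", "R4"): 8,   # know but not sure, then correct
    ("R3", "R6"): 0,   # no idea, then continue
    ("R2", "R5"): 0,   # not sure, then wrong
    ("R1", "R5"): 0,   # sure, then wrong
}

def initial_strength(recall, verify):
    """Return L0 for a word on its first presentation in the system."""
    return INITIAL_L0[(recall, verify)]
```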
Adjustment of the golden sequence to word difficulty
According to (9), the next review interval for the nth trial is A0*Exp(n-1). There are a few cases in which this estimate is not accurate:
1. The parameter A0 of A0*Exp(n) may not be appropriate;
2. The function Exp in A0*Exp(n) may not be appropriate;
3. The initial value A0*Exp(L0+1) for the next review time may not be the best guess;
4. The system does not require the user to review each word at exactly the next review time determined by the golden sequence. Instead, it permits the user to follow the user's own schedule, and the system arranges the learning sequence accordingly to gain the best result. Thus, there are cases in which words are overdue when reviewed; that is, the words are actually reviewed very late compared to the best time to review them. Consequently, a word's long-term memory at the review time may not be L(n-1) as predicted by the model.
Typically, a large amount of experimentation is required to obtain the best estimates of the above parameters. In this embodiment, the best estimation is set forth based on testing experience, and by allowing the mechanism to constantly evaluate the model and accordingly calibrate the L value when the model does not predict the user's performance well.
In a repeated trial, L is calibrated by the following rules:
1: R1 (Know for sure) followed by R4 (Correct); L = L;
2: R2 (Know but not sure) followed by R4 (Correct); L = L - 0.5;
3: R3 (No idea) followed by R6 (Continue); L = L - 6;
4: R2 (Know but not sure) followed by R5 (Wrong); L = L - 6;
5: R1 (Know for sure) followed by R5 (Wrong); L = L - 6.
After the calibration, L is incremented by 1 to account for the learning that occurs in the trial.
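The calibration rules for a repeated trial, including the final increment of 1 for the learning event, can be sketched as follows (illustrative Python; the names are ours):

```python
# L adjustment, keyed by (recall response, verification response), for a
# repeated trial.
CALIBRATION = {
    ("R1", "R4"): 0.0,   # sure and correct: the model predicted well
    ("R2", "R4"): -0.5,  # correct but unsure
    ("R3", "R6"): -6.0,  # no idea
    ("R2", "R5"): -6.0,  # unsure and wrong
    ("R1", "R5"): -6.0,  # sure but wrong
}

def calibrate(L, recall, verify):
    """Calibrate L against the actual performance, then add 1 for the
    learning event in the trial."""
    return L + CALIBRATION[(recall, verify)] + 1.0
```

For instance, a word at L = 5 answered "no idea" drops back to L = 0, so the word is effectively relearned from scratch.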
Other Adjustments
In addition to the L calibration described above, the parameter A can also be adjusted to optimize the memory engine for a user, according to an embodiment of the present invention. If the A value is not appropriate, the L values of many words will need frequent adjustment. Accordingly, the A value determines the overall performance. One method of adjusting the system is to ensure that the error rate stays within a specific range, e.g. 5% to 10%. A higher error rate indicates that the model overestimated the memory power of the user and the user frequently forgot items just learned. This leads to relearning of previously learned items, causing frustration in the user. A lower error rate indicates that the model underestimated the user, and it is likely that many items are redundantly reviewed, causing frequent disruption of the ongoing long-term memory consolidation process. Thus, the system can evaluate the error rate and adjust the A value after a certain number of trials, e.g. 100 trials, by the following rule:
• If Error rate >10% and A>0, A=A-0.5;
• If Error rate <5%, A=A+0.5;
When necessary, the function EXP can also be changed to a power function to achieve more flexibility.
The above describes how the memory model is adjusted at a global level. At a more detailed level, the parameter A can be adjusted for each word; that is, the error rate for reviewing a specific word is calculated and the A value for that word is adjusted accordingly.
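The error-rate rule amounts to a small feedback controller on A. A minimal sketch, assuming the 5-10% target band and 0.5 step stated above:

```python
def adjust_A(A, error_rate, step=0.5):
    """Nudge the scaling factor A so the review error rate stays near 5-10%."""
    if error_rate > 0.10 and A > 0:
        return A - step   # intervals too long: the user forgets too often
    if error_rate < 0.05:
        return A + step   # intervals too short: reviews are redundant
    return A              # within the target band: leave A unchanged
```

The same function can be applied globally or, as noted above, per word, by tracking a separate error rate and A value for each item.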
THE OPTIMIZER
The optimizer (128) component of the memory engine optimizes the exact learning process based on the user's schedule and the memory status of each word for a particular user. The optimizer (128) takes data from the golden sequence database (126), so it knows the best times to review each word. Based on this information, it determines which word is to be presented in the next trial to achieve the best overall long-term memory results. The optimizer (128) consists of a set of algorithms to arrange the trial sequence.
Furthermore, in an embodiment of the present invention, the optimizer (128) works in two modes: Learning mode and Review Only mode. The main difference between the two modes is that in Review Only mode, new words are blocked from being presented, and the user only reviews words that were learned before. This mode is preferred when the user expects a break of more than a few days and wants to focus on the learned words to prevent massive forgetting during the break.
Learning Mode:
• If there is no item due for review, present a new item in the default order or the order selected by the user.
• If there is only one item due:
1. Present it at the next trial and update its memory status.
2. Update the next reviewing time by adding the next review interval to the previous review time.
• If there are multiple items due for review:
1. Select the due item that has the minimum review time.
2. Present it, update the L value according to the actual performance, and update its memory status.
3. Update the next reviewing time by adding the next review interval to the previous review time.
4. Return to step 1 until no item is due.
Review Only Mode:
1. New items are blocked. Present only the items that were learned before.
2. Select the item that has the minimum review time.
3. Present the item and adjust the L value; the amount of adjustment is halved.
4. Update the next reviewing time by adding the next review interval to the previous review time.
5. Repeat from step 2 until exit.
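In outline, the item-selection rule shared by both modes can be sketched as follows (the item representation and helper names are hypothetical; the specification does not prescribe an implementation):

```python
import time

def next_item(due_items, all_items, review_only=False):
    """Pick the next trial item per the modes above: serve the due item
    with the minimum review time first; if nothing is due, present a
    new item -- unless Review Only mode blocks new items."""
    if due_items:
        # Multiple items due: take the one with the earliest review time.
        return min(due_items, key=lambda it: it["next_review"])
    if review_only:
        return None  # New items are blocked in Review Only mode.
    new_items = [it for it in all_items if it["next_review"] is None]
    return new_items[0] if new_items else None  # Default presentation order.

def reschedule(item, interval):
    """Update the next reviewing time by adding the next review interval
    to the previous review time (or to now, for a first presentation)."""
    item["next_review"] = (item["next_review"] or time.time()) + interval
```

Each presented item would have its L value, and hence its next interval, updated from the actual performance before `reschedule` is called.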
Example 1: PRELIMINARY RESULTS
Fig. 6 shows testing data obtained at Northwestern Polytechnic University with 40 Chinese students who were studying in the ESL program. The words used were the most frequently tested TOEFL words and were mostly novel to the students. Most students spent about a half-hour per day. There are a few important aspects of the results:
1. The learning speed ranged from 30 words per hour to 100 words per hour. The average speed was 50 words per hour.
2. When a student is learning new words, the student does not forget previously learned words, because the memory engine ensures that all the learned words are still in memory before presenting new words.
3. The progress is mainly linear. That is, if a student acquired sixty words per hour, the student is likely to keep this speed up to hundreds of hours.
4. Two students' acquisition speed increased with hours of using the system. This indicates that the students' memory power was enhanced. The system users have better memory when attempting to remember other things such as addresses and phone numbers.
VARIATIONS The above illustrations describe embodiments of the system and method with a recall task in both Learning and Review Only modes. The illustrated system, however, is not the only utilization of the present invention. In an embodiment, the recall task can be substituted with other associative learning tasks in either Learning or Review Only mode to create a rich learning experience for users and to provide broad training on various aspects of associative learning. The learning contents are not limited to vocabulary; the contents, for instance, can be any of the associative contents described in the Background of the Invention section above. Thus, embodiments of the present invention may be presented with a combination of any of the associative learning tasks and any of the associative contents.
Other forms of associative learning An associative learning trial can take many different forms in which an associative pair of items is presented to the user to learn and recall. Some possible forms of presenting these associative pairs are listed and illustrated below according to embodiments of the present invention. Variants of the recall task include, but are not limited to: • Reverse recall task: present B and ask the user to recall A. The user interface and workflow are the same as the recall task illustrated in Fig. 2 to Fig. 6 as embodiments of the present invention.
• Spelling task: the system presents B visually or acoustically and asks the user to type or spell out A. Fig. 7 illustrates a simple workflow for this task according to an embodiment of the present invention.
• Listening comprehension task: the system presents A acoustically and asks the user to report B. Fig. 8 illustrates a simple workflow for this task according to an embodiment of the present invention.
Other associative learning tasks for presenting associative pairs may include: • Multiple choice task: A, its associate B, and a few non-related items C, D, and E are presented together. The user indicates which one among B, C, D, and E is associated with A. Fig. 9 illustrates a simple user interface to implement this task according to an embodiment of the present invention. • Fill-in task: presenting context B for target A and leaving a space for the user to fill in A. Fig. 10 illustrates a workflow and simple user interface to implement this task according to an embodiment of the present invention. • Game: the associative pairs [An, Bn] can be embedded into game-like scenarios to make associative learning more fun according to an embodiment of the present invention.
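A minimal sketch of the multiple-choice task just described (function names and distractor handling are illustrative assumptions, not part of the specification):

```python
import random

def build_multiple_choice(pair, distractors, rng=random):
    """Present A with its associate B mixed among non-related items
    C, D, E; return the prompt and the shuffled answer options."""
    a, b = pair
    options = [b] + list(distractors)
    rng.shuffle(options)  # randomize option order for each trial
    return a, options

def check_choice(pair, chosen):
    """True if the user picked the item actually associated with A."""
    return chosen == pair[1]
```

The result of `check_choice` would then feed back into the memory engine like any other recall result, updating the pair's memory status and next review time.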
EXTENSION OF THE APPLICATION
According to another embodiment of the present invention, a general way of extending the system is to separate the learning trials from the reviewing trials, and to change the learning trials from a recall task to a more natural learning task -- reading. Whenever the user encounters a word whose meaning the user does not know, a word that the user is not sure how to pronounce, or a sentence that the user cannot understand, the user can simply indicate so on the system interface. The system will store these contents and the corresponding associated contents as associative pairs for building associative memory.
The reading task has a momentum that makes it undesirable to interrupt the task frequently with reviewing trials. Thus, learning in such a general system has two main working modules: one is to read the learning materials with all the assistance for difficulties; the other is the associative memory building module, where the user systematically builds associative memory for the contents found difficult in the reading. The system will recommend which working module the user should be in.
When the overdue associative memory of a user is above a certain criterion, the system will recommend building the associative memory. Otherwise, the system will recommend that the user continue reading to learn new materials.
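The recommendation rule just described might be sketched as follows (the criterion of 20 overdue items is an assumed threshold; the specification leaves the criterion unspecified):

```python
def recommend_module(overdue_count, criterion=20):
    """Recommend which working module the user should be in:
    build associative memory when overdue reviews exceed the
    criterion, otherwise continue reading new material."""
    if overdue_count > criterion:
        return "memory_building"
    return "reading"
```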
Although the description above contains many specificities, these should not be construed as limiting the scope of the invention but as merely providing illustrations of some of the presently preferred embodiments of this invention. For example, the present system can be integrated with reading activities to systematically handle vocabulary learning in a normal reading process.
Throughout the description and drawings, example embodiments are given with reference to specific configurations. It will be appreciated by those of ordinary skill in the art that the present invention can be embodied in other specific forms. Those of ordinary skill in the art would be able to practice such other embodiments without undue experimentation. The scope of the present invention, for the purpose of the present patent document, is not limited merely to the specific example embodiments of the foregoing description, but rather is indicated by the appended claims. All changes that come within the meaning and range of equivalents within the claims are intended to be considered as being embraced within the spirit and scope of the claims.

Claims

What is claimed is:
1. A computer facilitated system to optimize associative learning and for memory exercising in a user, comprising:
an interface having an input device and output device, wherein the interface is capable of presenting visual or audio information and receiving a plurality of user responses; a computer processor for processing said information and responses; a contents database for storing subject matter to present to the user in the form of an associative pair; and a memory engine; wherein when a user is presented a plurality of associative pairs retrieved from the contents database, a trial and a memory status is created in the memory engine based on a response by the user to the plurality of associative pairs; and the memory engine determines a real-time optimal sequence and order for presenting the user with the plurality of associative pairs again and a plurality of new associative pairs.
2. The system of claim 1, wherein the associative pairs are a pairing of a language word and a definition.
3. The system of claim 1, wherein the contents database is a removable unit and the subject matter is stored on a mobile memory device.
4. The system of claim 1, wherein the memory engine further comprises: a user history database for recording a user profile, a plurality of results of the user recalling an associative pair, and a plurality of trial-related information; a memory simulator for processing a memory status of an associative pair based on the results in the user history database, to determine a best review interval for an associative pair; a sequences database for storing a plurality of best review intervals for a plurality of associative pairs presented to the user; and a process optimizer comprising a set of algorithms for retrieving a data output from the sequences database to determine an associative pair to next present the user; thereby achieving a best long-term memory result.
5. The system of claim 4, wherein the process optimizer incorporates a user-defined study schedule into determining the optimal time and order for presenting associative pairs to the user.
6. The system of claim 4, that is implemented using a web application, PC program, wireless device, personal digital assistant or toy.
7. The system of claim 4, further comprising the interface displaying information indicating a progress of the user and memory status in real-time as obtained from the history database.
8. The system of claim 4, wherein the sequences database stores the plurality of best review intervals based on a dynamic memory model.
9. The system of claim 8, wherein an initial result from the user recalling an associative pair is compared to a prediction of the memory model and the memory model is thereby adjusted.
10. The system of claim 8, wherein the user history database further tracks an error rate in the plurality of results of the user and adjusts the memory model to maintain the error rate within a range.
11. The system of claim 10, wherein the range of the error rate is within about 5 to about 10 percent.
12. The system of claim 4, wherein the process optimizer further comprises a learning mode, to determine whether to present a past associative pair or a new associative pair to the user under a set of learning scenarios comprising: when no past associative pair is due for review, when only one past associative pair is due for review, and when a plurality of past associative pairs are due for review, wherein the memory status and review interval are updated for each scenario after the user is presented with and responds to a past associative pair or new associative pair.
13. The system of claim 4, wherein the process optimizer further comprises a review only mode for presenting the user with only a plurality of past associative pairs whereby a new associative pair is blocked from being presented; and the memory status and review interval are updated after the user is presented with and responds to a past associative pair.
14. A computer implemented method for optimizing associative learning and memory exercising for a user comprising:
(a) identifying a user through profile data inputted by the user through an interface and storing said profile data in a user history database;
(b) presenting the user with a plurality of associative pairs from a program of study selected by the user in a contents database;
(c) requesting the user to recall an associative pair;
(d) recording a response by the user in recalling the associative pair and ascertaining a memory status for the associative pair with a memory simulator;
(e) determining a best review interval and sending the review interval to a sequences database;
(f) storing a plurality of best review intervals in the sequences database and sending an output from the sequences database to a process optimizer; (g) determining a time and order for presenting a next associative pair to the user using a set of algorithms in the process optimizer and the output from the sequences database; and (h) presenting the user a previously presented associative pair or a new associative pair as determined by the process optimizer; such that the user achieves a best long-term memory result.
15. The method of claim 14, wherein an associative pair has a first and second element and the response by the user is selected from an interface presenting only a first or a second element of said associative pair and a plurality of choices comprising: a sure recall; an unsure recall; and no recall; wherein if the user selects a sure recall or an unsure recall, the first or second element not presented on the interface is then presented and the user indicates whether the response is correct; and wherein if the user selects a no recall, the first or second element not presented on the interface is then presented and the user indicates to continue to a next associative pair.
16. The method of claim 14, further comprising: repeating steps 14(c) through 14(h) as a trial; updating the memory status in the history database; updating the review interval in the sequences database; adjusting the plurality of best review intervals in the sequences database; adjusting the time and order for presenting the next associate pair; storing the memory status in the user history database for future retrieval by the user.
17. The method of claim 16, further comprising: activating a learning mode to determine whether to present a past associative pair or a new associative pair to the user under a set of learning scenarios comprising: when no past associative pair is due for review, when only one past associative pair is due for review, and when a plurality of past associative pairs are due for review; wherein the memory status and review interval are updated for each scenario after the user is presented with and responds to a past associative pair or new associative pair.
18. The method of claim 16, further comprising: selecting a review only mode for presenting the user with only a plurality of past associative pairs whereby a new associative pair is blocked from being presented; and updating the memory status and review interval after the user is presented with and responds to a past associative pair.
19. The method of claim 14, that is implemented using a web application, PC program, wireless device, personal digital assistant or toy.
20. A computer facilitated apparatus for optimizing associative learning and memory exercising for a user comprising: an interface means having an input device and output device, wherein the interface means is capable of presenting and retrieving visual or audio information; a computer processing means for processing said information; a means for storing subject matter to present to the user in the form of an associative pair; a memory engine means further having a means for recording and storing a user profile and a result of the user recalling an associative pair to be stored as a memory status; a memory simulating means for processing the memory status to determine a plurality of review intervals; a means for storing and updating a plurality of best review intervals for a plurality of associative pairs; and a process optimizing means comprising a set of algorithms to determine whether to present a past associative pair or a new associative pair to the user; wherein as the memory status is updated, the best review intervals are updated, and the process optimizing means determines an optimal time and an order for presenting a past associative pair or a new associative pair to the user thereby achieving a best long-term memory result.
