US20070128979A1 - Interactive Hi-Tech doll

Interactive Hi-Tech doll

Info

Publication number
US20070128979A1
Authority
US
United States
Prior art keywords
doll
electronic
player
mouth
verbal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US11/602,882
Inventor
Judith Shackelford
Adam Anderson
Jason Heller
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
J SHACKELFORD ASSOCIATES LLC
J Shackelford Assoc LLC
Original Assignee
J Shackelford Assoc LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by J Shackelford Assoc LLC filed Critical J Shackelford Assoc LLC
Priority to US11/602,882
Assigned to J. SHACKELFORD ASSOCIATES LLC. reassignment J. SHACKELFORD ASSOCIATES LLC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ANDERSON, ADAM MICHAEL, HELLER, JASON GENE, SHACKELFORD, JUDITH ANN
Publication of US20070128979A1
Abandoned legal-status Critical Current

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63H TOYS, e.g. TOPS, DOLLS, HOOPS OR BUILDING BLOCKS
    • A63H3/00 Dolls
    • A63H3/001 Dolls simulating physiological processes, e.g. heartbeat, breathing or fever
    • A63H3/28 Arrangements of sound-producing means in dolls; Means in dolls for producing sounds
    • A63H3/36 Details; Accessories
    • A63H3/365 Details; Accessories allowing a choice of facial features, e.g. to change the facial expression
    • A63H2200/00 Computerized interactive toys, e.g. dolls

Definitions

  • the present invention relates to high-tech dolls and, more particularly, to an interactive talking doll that in both speech and facial expression simulates the attitude and conversational capability of a very young child, chews food, asks questions, understands answers, sings songs, plays games and, all in all, is a great companion for a young child.
  • One plaything that's endured over the years is the doll.
  • a young girl's parents will at some time purchase a doll as a gift for their daughter, fully expecting the gift to be received with appreciation. The parents are never disappointed. Often the girl's grandparents do the same for their granddaughter. They regard play with dolls as a wholesome activity and, importantly, an activity that's a good deal of entertainment for the child.
  • doll play benefits young girls in that they develop a tender, nurturing relationship with the doll, often mimicking their mother's behavior toward them by “mothering” the doll for hours on end and/or treating the doll as a friend or confidant.
  • a doll is a figure that in appearance looks like a person, usually a small girl or boy, and constitutes a plaything for a child, typically for a young girl.
  • that kind of plaything has de facto expanded in definition in one direction to include small male figures, referred to as action figures, that are played with by young boys without the stigma of femininity (e.g. dolls for boys), and even virtual beings resident in an electronic game and visible to the child only as an image on the liquid crystal display of the electronic game.
  • the scope of the doll category has also expanded in definition in another direction to include animals, cartoon characters and fantasy characters.
  • the term doll as used herein is intended to encompass all such forms and should so be understood when reading this specification and the described technology that improves all those structures. That's true, even when the preferred embodiment of the present invention, as will be seen, is of the form of a small child, as later herein described.
  • the doll contains a body or torso, a head and at least some appendages, such as arms and/or legs. Assuming the doll is to mimic a young female child, the arms, legs, torso and head are typically designed to appear natural and life-like in appearance and feel.
  • the '679 patent recognized various types of player inputs to the doll that resembled real life activities, such as feeding, grooming, playing, dressing and the like. That input was accomplished in part by inclusion of two sensors carried in the doll that, with the aid of an electronic controller in the doll, essentially recognized objects.
  • One sensor was located in the mouth of the doll and recognized insertion of various simulated fruits and vegetables into the mouth of the doll.
  • Each simulated fruit or vegetable carried an electrical resistance of a value that was unique to the object. That resistance was included in a series circuit between two electrical contacts on the simulated food object. By engaging electrical contacts carried on the sensor with the corresponding contacts on the simulated food object, the resistance is effectively placed in a resistance measuring circuit in the electronic controller in the doll.
  • the resistance measuring circuit measured the value of the electrical resistance between those contacts, thereby identifying the particular food by the resistance value assigned thereto by the doll designers, and the controller processed that information.
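  • By way of illustration, a minimal sketch of that prior-art identification scheme follows, assuming hypothetical resistance values and a simulated reading from the doll's resistance-measuring circuit; the actual values assigned in the '679 doll are not stated in this specification.

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical table of resistance values (ohms) assigned to each
 * simulated food piece; the actual values used in the '679 doll are
 * not given here. */
struct food_entry { const char *name; int ohms; };

static const struct food_entry food_table[] = {
    { "apple",  1000 },
    { "carrot", 2200 },
    { "banana", 4700 },
};

/* Identify a food piece from a measured resistance, allowing a
 * tolerance band around each nominal value. */
static const char *identify_food(int measured_ohms, int tolerance)
{
    for (size_t i = 0; i < sizeof food_table / sizeof food_table[0]; i++) {
        if (abs(measured_ohms - food_table[i].ohms) <= tolerance)
            return food_table[i].name;
    }
    return NULL; /* contacts open, or unknown accessory */
}

int main(void)
{
    /* In the doll this value would come from the controller's
     * resistance-measuring circuit; here it is simulated. */
    int measured = 2150;
    const char *food = identify_food(measured, 200);
    printf("measured %d ohms -> %s\n", measured, food ? food : "not recognized");
    return 0;
}
```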
  • the second sensor carried on the back of the doll recognizes the insertion of (e.g. dressing of) an article of clothing on the doll.
  • a magnetic sensor in the form of a magnetic switch identified the presence of a hair brush that was moved across the hair of the doll by the child: the magnet carried in the hair brush actuated the magnetic switch as the hairbrush was swept across the hair of the doll.
  • in dolls and other character toys, the emulation of emotion uses sound effects and/or facial sculptures.
  • Those sculptures may be of plastic, vinyl, porcelain, soft sculpted fabric or plush, or other means to depict the emotion.
  • the face of the doll is sculpted to appear sad.
  • the child holding that doll may believe the doll character is unhappy, because the doll character has an appearance that, through experience, the child has learned is consistent with sadness or, in the case of an animal or other non-human form of doll, the character's anthropomorphic appearance of sadness.
  • Those actions are recognized by the child as being associated with sadness.
  • the illusion of sadness is reinforced if a crying or whimpering sound is introduced and broadcast from the loudspeaker on the doll.
  • Realism is further enhanced by introduction of mechanical animatronics, adding the opening and closing of the mouth of the doll, the opening and closing eyes/eyelids of the doll, and some additional movements of facial parts, such as eyebrows and lips that were somewhat synchronized to the audio sounds originating from the doll.
  • the robot's vision system consists of four color CCD cameras mounted on a stereo active vision head. Two wide field of view (fov) cameras are mounted centrally and move with respect to the head. These are 0.25 inch CCD lipstick cameras with 2.2 mm lenses manufactured by Elmo Corporation. They are used to decide what the robot should pay attention to, and to compute a distance estimate. There is also a camera mounted within the pupil of each eye. These are 0.5 inch CCD foveal cameras with 8 mm focal-length lenses, and are used for higher resolution post-attention processing, such as eye detection.
  • Bonding is the close personal relationship that forms between people, such as parent and child. Each person in that relationship experiences an emotional need for the other and communicates that need through expressions of love when the other is present; bonding thus expresses a need for the presence of the one to whom the character is bonded. Additionally, no character doll is known that actually emulates the emotional bonding of the character doll to a child, emulating actual emotional attachment to the owner, player and/or child and a need for that person.
  • the emulation or illusion of bonding is achieved by combining: enabling the doll to know when the bonded person is present; animatronics expressing, through facial expressions, the feelings of the character for the bonded person, the make-believe mother; and emotional conversational interaction with that person.
  • the doll may request something of the child and the child in response declines that request (e.g. says “no”) which the doll understands.
  • the demeanor of the doll changes to disappointment and sadness, which the child observes.
  • the child recognizes thereby that it has “hurt” the doll and is “touched” thereby in the heart. How can this action not produce an emotional bonding?
  • the doll invention is able to recognize accessories without requiring the electrical contacts and electrical resistors used in the preferred embodiment of the prior '679 patent.
  • RFID readers and RFID tags assist in accomplishing the function of identifying the accessory, without physical contact between the tag and reader. That structure is transparent to the child. As an advantage, the invention is more magical to a young child.
  • the sensor inside the doll's mouth required direct physical contact with the simulated food (piece) placed in the doll's mouth in order to make the appropriate eating or drinking sounds when the food was inserted in the mouth of the doll.
  • using electrical resistors to simulate feeding is quite difficult with an animatronic doll whose mouth is undergoing the motion of chewing, because the metal contact points connected to the resistor in the food accessory play piece must make physical contact with the two metal sensors in the mouth while the mouth is moving.
  • the prior Amazing Amy doll, hence, did not possess that capability.
  • the present invention is able to easily detect the particular food that is inserted in the mouth, even though the mouth is opening and closing.
  • a principal object of the present invention is to provide an interactive high-tech doll that is able to preoccupy the interest of a young child.
  • a further object is to provide an interactive doll that is able to recognize at least its mother's voice so the doll can know with whom the doll is conversing.
  • a still further object of the invention is to provide a doll that may, in one embodiment, interact, such as by singing songs and playing games, with anyone or, in other embodiments, only with those whose voices the doll recognizes.
  • an additional object of the invention is to create a doll that's capable of carrying on a limited conversation, such as by asking a question, recognizing the answer given in reply, and providing an additional response.
  • a high-tech doll produces human-like expressions, recognizes words, is able to play games, carry on a conversation, and express attitudes, emulating a living child.
  • the doll is capable of bonding to the child and, conversely, the child is able to bond to the doll.
  • the doll recognizes what the child says in response to prompts from the doll and is capable of carrying on a limited conversation with the child.
  • the doll exhibits facial expressions, produced by facial animators in synchronism with spoken words or separately, to effectively provide visible facial body language, representing any of happiness, sadness or grief, surprise, delight, curiosity, and so on to reinforce the emulation of a living child.
  • the doll recognizes the voice of the child at particular points in the doll's programming.
  • a preferred embodiment of the invention may include the feature of voice recognition, wherein the doll is trained to recognize a word (or words) spoken in the child's voice (that word spoken in that voice, e.g. a voice print, being categorized by the doll as the “mother” of the doll); the feature of speech recognition of words spoken to the doll by the mother and any others; or both of the foregoing features, that is, a limited amount of voice recognition and mostly speech recognition.
  • the doll recognizes whether it is being addressed by the pseudo-mother or by someone else.
  • the doll creates a response to the child's verbal stimulus by answering the child with spoken words, usually accompanied by an appropriate facial expression or, alternatively, answers only with an appropriate facial expression as body language.
  • the child is taken in by the illusion that the doll is a living person.
  • the voice recognition system of the doll is able to voice print members of the “family” in addition to the mother, such as grandma, grandpa, uncle and aunt and other family members, is able to identify which family member is addressing the doll, and provide individually tailored verbal messages or other responses to the particular family member identified.
  • the doll is able to establish relationships with respective family members, and those relationships differ in some respects from the relationship the doll has with the bonded person, the pseudo-mother.
  • various simulated food accessories may be inserted in the mouth of the doll; the doll recognizes the food and simulates chewing that food.
  • the food accessories contain RFID tags, electronically identifying the respective articles, while the doll includes RFID sensors, controlled by the controller of the doll, that read the RFID tags carried by the simulated food products that are inserted into the mouth of the doll. That RFID tag reading is transparent to the child.
  • the doll may then broadcast spoken words indicative of knowledge of the particular simulated food product, further adding to the illusion of reality.
  • FIGS. 1A and 1B are respective pictorials of the preferred embodiments of the doll viewed from the front with the doll wearing a dress and the doll unclothed, respectively;
  • FIGS. 2A and 2B are like pictorials of the respective views of the doll of FIGS. 1A and 1B , respectively, as viewed from the rear;
  • FIG. 3 is a block diagram of the electronic package carried in the embodiment of FIG. 1 ;
  • FIG. 4 is a pictorial of a potty accessory for use with the embodiment of FIG. 1 ;
  • FIG. 5A is a block diagram illustrating the actuators located in the head of the doll for controlling movement and/or positioning of various parts of the doll face, such as the lips, jaw, cheeks, eyebrows and the like;
  • FIG. 5B is an exploded view of a preferred mechanism for controlling movement of various parts of the doll face incorporated in a practical embodiment of a doll in lieu of that of FIG. 5A ;
  • FIG. 5C is a side view of the mechanism of FIG. 5B as assembled viewed from one side and FIG. 5D is a side view of that mechanism from the opposite side;
  • FIG. 6A is a pictorial of a dress in which to clothe the embodiment of FIG. 1 illustrating the position of the associated RFID tag in that clothing article and
  • FIG. 6B is a pictorial of a nighty in which the embodiment of FIG. 1 is dressed for bed-time and showing the respective RFID tag location;
  • FIG. 7A illustrates various food articles that are recognized and used by the doll of FIG. 1 in play incorporating RFID tags with the respective articles;
  • FIG. 7B illustrates a dish accessory for play with the embodiment of FIG. 1 ;
  • FIG. 7C illustrates a toothbrush accessory used in play with the embodiment of FIG. 1 ;
  • FIGS. 8A through 8F illustrate a variety of facial expressions produced in the face of the doll embodiment of FIG. 1 during play.
  • FIGS. 1A and 1B are pictorial front views of one embodiment of the doll 10 , dressed and undressed, respectively, and FIGS. 2A and 2B illustrate those dolls from the rear.
  • doll 10 contains an outer shell or torso 12 and a head 14 .
  • That torso includes a number of parts that are attached together, such as sewn together with thread: arms and legs of padded vinyl, as example Kraton® based soft skin or rotocast polyvinyl chloride (“PVC”) or the like, typically stuffed with a filler material, as example polyfil, and shaped to resemble and flex like human skin; and a short-sleeved and short-pant sweater-like portion 11 of soft fabric material that attaches to one end of the arms and legs.
  • PVC: polyvinyl chloride
  • the head 14 of the doll is formed of a hard, stiff plastic skull covered by a molded-to-shape skin formed from a soft-to-the-feel thermoplastic elastomer (TPE) material or the like; the skin is molded to shape to fit over and overlie the plastic skull (not visible in the figure) and contains openings for the eyes and mouth of the doll that are included in the underlying skull.
  • the covering skin contains a head of hair 13 , preferably formed of strands of Nylon, that's fastened to the skin.
  • the doll head 14 is attached by a neck portion indirectly to the torso and is carried by that torso.
  • torso 12 of the doll is hollow and contains a plastic box 18 , represented in dash lines, that houses the active electronic components, later more fully described, as well as a surround of bulk padding material to fill the remaining space. Access to the interior is made by opening the seam 19 on the backside of the doll in the sweater-like portion 11 of the torso to expose a cover panel 16 of the plastic torso box and allow access to the battery compartment should battery replacement become necessary. That seam is closed by a Velcro® fastener, not illustrated, which closes off access to the internal hardware and electronics but may be repeatedly pulled open and then re-closed.
  • doll 10 and associated appendages are not particularly important conceptually to the invention, as becomes apparent from the description, but in this embodiment preferably conform with the proportions and appendages of a child. Due to the limpness of the sweater-like material 11 , the attached appendages may be manually placed in a variety of positions, wherein the doll may be sitting or lying down. In other, more complex embodiments the doll may be articulated if desired, either fully or partially, and/or have arms, legs and torso molded instead of partially made of fabric and, if so, may be placed in a greater variety of positions, including a standing or walking position.
  • the torso 12 is shaped, sized and weighted to allow the player (i.e., typically a young child) to comfortably hold the doll in the child's arms without being unduly burdensome.
  • the torso and head may be shaped as shown in the drawings, and may have an overall dimension (height × width × thickness) of about 20″ × 7″ × 3″ (inches).
  • the electronics package 18 , represented as a dash-line block in FIG. 1B and in a more detailed block diagram in FIG. 3 , is described with reference to the latter figure.
  • the package includes a programmable electronic controller 20 , the doll controller.
  • the doll controller is a programmed general purpose electronic processor or microprocessor (or microcomputer), also known as a microcontroller, preferably in the form of a single integrated circuit (“IC”) chip, which integrates a speech-optimized digital to analog converter (DAC) and an analog to digital converter (ADC) into a single IC chip capable of accurate speech recognition as well as high quality low data-rate compressed speech.
  • IC: integrated circuit
  • the IC preferably provides the further on-chip integration of features, including a microphone pre-amplifier, twin-DMA units, vector accelerator, hardware multiplier, three timers and 4.8 kbytes of RAM.
  • ROM: read-only memory
  • Controller 20 includes various inputs 26 and outputs 28 and is powered by an external battery 30 , which may be rechargeable and/or replaceable. Electrical current from the battery is supplied through the main power switch 31 , an on-off type switch. Switch 27 is a reset switch, which may be a momentary operate type switch.
  • the controller may include access to external or add-on memory 22 provided on a separate memory chip, and electrically erasable programmable read-only memory (“EEPROM”), not separately illustrated, which retains the memory contents even when electrical power is removed from the circuit.
  • EEPROM: electrically erasable programmable read-only memory
  • the inputs and outputs are electrically connected to various sensors and other components, such as the microphone and loudspeaker, later herein described, by electrical conductors or cabling, not illustrated in the figure to preserve clarity.
  • the controller is programmed to perform the various actions described herein, including the utility functions of a clock-calendar, speech synthesis, speaker-independent recognition (speech recognition) and speaker-dependent recognition (voice recognition). Although those and other utilities are included as stages in the programmed microcontroller 20 , as an aid to understanding of the invention those functions may be separately represented in the figure in dash-line blocks associated with the programmed microcontroller 20 , with speech and voice recognition illustrated as block 25 and the electronic clock-calendar as block 24 .
  • Loudspeaker 39 , also installed in the chest of the doll, connects to an output 28 of the controller 20 .
  • the loudspeaker converts any verbal information contained in the electrical signal that is output from controller 20 to an audible sound, suitably an analog sound, and acoustically broadcasts that sound to those near the doll who would be listening.
  • output from the controller may include the function of and/or be supplied from a digital-to-analog converter. If desired an amplifier or other sound reproducer may be coupled in between the digital to analog converter and the loudspeaker for enhanced sound.
  • a microphone also installed in the chest of the doll, connects to an input of controller 20 .
  • the doll “listens” for an answer to a particular prompt or question that the doll has spoken.
  • the microphone receives responses and sound from those persons who are nearby, and converts any audible verbal information or other sounds contained in the analog speech output from those nearby into a digital signal, and inputs that signal to controller 20 .
  • the speech recognition technology, programming, algorithms, and recognition nets/sets compare the converted analog, now digital, information received with the digital information that the doll was programmed to anticipate receiving and send to controller 20 a “yes,” indicating the information expected was received, or a “no,” indicating the information received was unexpected.
  • the programming causes the doll to react verbally and animatronically, by means of facial expression, when the word is one that was expected by the programming, or to react differently when the word was unexpected or unrecognized, as the case may be.
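  • A minimal sketch of that controller-side decision follows; it only compares a word reported by the recognizer against the set of words the program anticipated for the current prompt. The function names and word lists are illustrative assumptions, and the actual recognition is performed by the speech-recognition nets on the controller chip.

```c
#include <stdio.h>
#include <string.h>

/* Result reported back to the game logic: the recognizer either
 * matched one of the anticipated words for this prompt or it did not. */
enum recog_result { RECOG_YES, RECOG_NO };

/* Compare the word reported by the recognition engine against the
 * list of words the program anticipated for the current prompt. */
static enum recog_result check_answer(const char *heard,
                                      const char *const expected[],
                                      int n_expected,
                                      int *which)
{
    for (int i = 0; i < n_expected; i++) {
        if (strcmp(heard, expected[i]) == 0) {
            if (which) *which = i;
            return RECOG_YES;
        }
    }
    return RECOG_NO;
}

int main(void)
{
    /* Prompt: "Do you want to play a game?" -> anticipated answers. */
    const char *const expected[] = { "yes", "no", "later" };
    int which = -1;

    if (check_answer("later", expected, 3, &which) == RECOG_YES)
        printf("expected answer #%d heard: react with matching expression\n", which);
    else
        printf("unexpected word: react with a puzzled expression\n");
    return 0;
}
```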
  • the doll appears to speak like a person.
  • the doll issues verbal instructions and other information or requests to the child player by essentially speaking those instructions, which are played, that is, broadcast through the loudspeaker.
  • the face of the doll is also animated:
  • the mouth of the doll opens and closes, the eyelids may flutter and so on like a live person.
  • the doll may also prompt the player at times during the course of an interactive conversation to respond with one of several possible words or phrases that the doll (or anyone else) might anticipate hearing in reply and, when recognized by the doll, responds with an intelligent reply or behavior.
  • the various sensors in the doll include: a mouth sensor 32 , located inside the doll mouth, that aids in identification of foods (and other objects) placed in the mouth of the doll; a hug sensor 33 (e.g. a push-button switch), located in the front mid-section of the doll, that aids in determining if the doll is being hugged; a butt sensor 34 , located in the rear buttock of the doll, that aids in determining when the doll is seated on an accessory, such as the simulated toilet 40 (of FIG. 4 ); a clothing sensor 35 , located beneath the lower neck, that aids in identifying the clothing worn by the doll; and a brush sensor 37 , located at the front top of the head of the doll, that aids in detecting whether the hair of the doll is being brushed.
  • the latter sensor may be in the form of a magnetic switch.
  • the audio sensor 36 , a microphone, is located in the front chest of the doll; it detects sound, converts that sound to an electrical signal, and thus aids in the electronic recognition of particular sounds and verbal information.
  • a force sensor 38 is included in the right hand of the doll to detect if the right hand is being squeezed.
  • Another force sensor 38 B may be included in the left hand to detect if the left hand is being squeezed.
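  • The sketch below gathers the foregoing sensor inputs into one snapshot that the controller program can branch on; the structure, field names and read routine are illustrative assumptions rather than the doll's actual firmware.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* One snapshot of the doll's inputs, matching the sensors described
 * above; field names are illustrative, not taken from the patent. */
struct doll_inputs {
    uint16_t mouth_tag_id;        /* RFID tag seen by mouth reader 32 (0 = none) */
    bool     hug_pressed;         /* hug sensor 33 (push-button) */
    uint16_t seat_tag_id;         /* RFID tag seen by butt reader 34 (0 = none) */
    uint16_t clothing_tag_id;     /* RFID tag seen by clothing reader 35 (0 = none) */
    bool     brush_detected;      /* magnetic brush sensor 37 */
    bool     right_hand_squeezed; /* force sensor 38 */
    bool     left_hand_squeezed;  /* force sensor 38B */
};

/* In the real doll these would be reads of controller inputs 26;
 * here the snapshot is filled with example values. */
static void read_inputs(struct doll_inputs *in)
{
    in->mouth_tag_id = 0;
    in->hug_pressed = true;
    in->seat_tag_id = 0;
    in->clothing_tag_id = 0;
    in->brush_detected = false;
    in->right_hand_squeezed = false;
    in->left_hand_squeezed = false;
}

int main(void)
{
    struct doll_inputs in;
    read_inputs(&in);
    if (in.hug_pressed)
        printf("hug detected: respond with a happy expression and a giggle\n");
    return 0;
}
```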
  • Mouth sensor 32 , butt sensor 34 and clothing sensor 35 are formed of radio frequency identification tag readers (“RFID reader”), a technology that's previously found application in inventory control and in supermarket applications.
  • RFID reader: radio frequency identification tag reader
  • the terms “sensor for RFID tags,” “RFID reader” and “RFID tag reader” (each of which essentially defines the respective sensor) are used synonymously herein.
  • the RFID tag reader detects RFID tags that are placed on or in products. Those tags contain encoded information about the product to which the tag is attached. Such tags may be “smart” in which the information on the tag may be changed (e.g. reprogrammed) or “dumb,” in which the information does not change, the latter being the kind preferred in the present application.
  • RFID tag reader 32 is installed in the upper roof of the mouth of the doll. That reader is coupled, wired, to an input and an output, 26 and 28 , of controller 20 ( FIG. 3 ), permitting the controller through the tag reader to interrogate an RFID tag during the course of the system program of controller 20 and receive the output, such as the identification information contained in an RFID tag (not illustrated) that is properly oriented to and positioned in the doll mouth adjacent reader 32 .
  • the RFID tag reader is able to pick up information from the RFID tag on the food item even though that reader may be spaced by as much as an inch from the food item.
  • the RFID reader inside the doll's mouth is still able to read the RFID tag carried by the food accessory and glean the identity of the food accessory.
  • the inputs and outputs of the second RFID reader 34 , installed in the lower back of the doll, specifically in the butt of the doll, are also coupled, that is, connected by an electrical conductor or wired, to a respective output and input, 28 and 26 , of controller 20 .
  • the second RFID reader permits the controller to interrogate an RFID tag during the running of the system program by the controller and receive the output, such as the identification information contained in an RFID tag 34 ′ that is installed within, as example, an accessory seat, such as a toilet seat of a child's potty 40 , pictorially illustrated in FIG. 4 .
  • the tag is read when the RFID tag in the seat is properly oriented and positioned relative to RFID reader 34 in the butt region of the doll while the doll is in the seated position on the accessory toilet seat.
  • the third RFID reader 35 , installed in the back of the upper right shoulder or neck of the doll, is similarly electrically connected to an input and output of controller 20 so that any RFID tag that is positioned closely adjacent to and properly oriented parallel to the reader, such as occurs when a tagged article of clothing is worn by the doll, may be polled for information by the controller when required by the program during the course of the doll's activity during play. As a result, the controller is able to identify the particular clothing article that is being worn.
  • One article of clothing may be a dress 41 and another a nighty 43 , as respectively pictorially illustrated from the rear in FIGS. 6A and 6B to which reference is made.
  • Dress 41 includes the identifying RFID tag 42 on the upper side by the collar of the dress so that the tag aligns with reader 35 when the dress is properly fitted on the doll.
  • nighty 43 includes the identifying RFID tag 44 that is interrogated by reader 35 when the nighty is properly fitted on the doll.
  • the dress that appears on the doll in FIG. 1A is another article of clothing that may carry an RFID tag in the foregoing manner.
  • switch 23 serves as a position or movement sensor and is of any conventional design.
  • the switch actuates if the doll is lifted or moved, causing a small metal ball inside the switch to move inside a metal cylinder, closing a switch at one end or the other of the metal cylinder.
  • Switch actuation thereby provides a signal to doll controller 20 , which the controller is programmed to interpret to determine whether the doll is lying down, sitting up or upside down and to take the appropriate next steps called for by the programming, such as later herein described.
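  • A minimal sketch of that interpretation follows, assuming one ball-switch contact at each end of the cylinder; the exact mapping of switch closures to postures is an illustrative assumption, not a detail given in the specification.

```c
#include <stdbool.h>
#include <stdio.h>

/* Orientation the program infers from movement switch 23.  The
 * patent says the controller interprets the switch closures; the
 * mapping below is an illustrative assumption. */
enum posture { POSTURE_LYING, POSTURE_UPRIGHT, POSTURE_UPSIDE_DOWN };

static enum posture infer_posture(bool lower_contact_closed,
                                  bool upper_contact_closed)
{
    if (lower_contact_closed && !upper_contact_closed)
        return POSTURE_UPRIGHT;       /* ball resting at the lower end */
    if (!lower_contact_closed && upper_contact_closed)
        return POSTURE_UPSIDE_DOWN;   /* ball resting at the upper end */
    return POSTURE_LYING;             /* ball between the contacts */
}

int main(void)
{
    enum posture p = infer_posture(true, false);
    printf("posture = %d (0 lying, 1 upright, 2 upside down)\n", (int)p);
    return 0;
}
```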
  • the output of microphone 36 is connected to an input 26 of controller 20 , through which any audio picked up by the microphone is processed by the voice and/or speech recognition program 25 of the controller.
  • the microphone picks up verbal information and applies that information via controller 20 as an electric signal to voice recognition processing.
  • the foregoing function may be assisted by an analog-to-digital converter, a known device, if one desires or finds need to apply digital signals to the controller; the result is then processed by the voice/speech recognition program 25 .
  • voice recognition: speaker-dependent speech recognition
  • This type of technology creates a “voice print” by requesting the user to speak a particular word multiple times, usually twice. That speaker dependent word now stored in memory is user specific, not word specific. That difference distinguishes what is herein called voice recognition from speech recognition.
  • Voice recognition technology enables identification of a specific word (or words) spoken by a specific user.
  • speech recognition enables identification of a specific word (or words) of a particular language and dialect, such as American English, spoken by anyone, and is thus speaker independent, depending on how that speech recognition technology is configured.
  • the recognition algorithm of the program detects whether the voice is one that was programmed into the doll during the setup procedure, that is, for which the program stored a voice print, and, when the recognition is confirmed, allows the controller to recognize the word or words that were used to make the voice print when spoken in that voice.
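  • The sketch below illustrates only the idea of a stored voice print compared against a later utterance, assuming feature vectors have already been extracted elsewhere; the enrollment averaging, distance measure and threshold are illustrative assumptions, as the real speaker-dependent recognition is performed by the controller chip's own nets.

```c
#include <math.h>
#include <stdio.h>

#define FEAT_LEN 8  /* length of the (assumed) feature vector */

/* A "voice print" template averaged over the enrollment repetitions. */
struct voice_print {
    double features[FEAT_LEN];
};

/* Enroll by averaging two repetitions of the training word. */
static void enroll(struct voice_print *vp,
                   const double rep1[FEAT_LEN],
                   const double rep2[FEAT_LEN])
{
    for (int i = 0; i < FEAT_LEN; i++)
        vp->features[i] = 0.5 * (rep1[i] + rep2[i]);
}

/* Accept the speaker if the new utterance is close enough to the
 * stored template; the threshold is an illustrative value. */
static int matches(const struct voice_print *vp,
                   const double utterance[FEAT_LEN],
                   double threshold)
{
    double dist = 0.0;
    for (int i = 0; i < FEAT_LEN; i++) {
        double d = utterance[i] - vp->features[i];
        dist += d * d;
    }
    return sqrt(dist) < threshold;
}

int main(void)
{
    double rep1[FEAT_LEN]  = {1, 2, 3, 4, 5, 6, 7, 8};
    double rep2[FEAT_LEN]  = {1.2, 2.1, 2.9, 4.2, 5.1, 5.8, 7.1, 8.2};
    double later[FEAT_LEN] = {1.1, 2.0, 3.0, 4.1, 5.0, 6.0, 7.0, 8.1};

    struct voice_print mom;
    enroll(&mom, rep1, rep2);
    printf(matches(&mom, later, 1.0) ? "recognized as mom\n"
                                     : "not the enrolled voice\n");
    return 0;
}
```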
  • Speech recognition is quite common nowadays, and is widely used in equipment. As example, speech recognition is used by the telephone company computers to receive (and respond to) customer queries for service and the like over the telephone. Speech recognition is also found in PC programs for the retail market like ScanSoft's Dragon Naturally Speaking® program and ViaVoice® program of the IBM company.
  • controller 20 for a practical embodiment of the invention is the controller chip that is available from Sensory, Inc. as model RSC 4128, a microcontroller chip with, among other conventional elements, 128 kilobits onboard read-only memory (“ROM”) to store data and an external memory interface, enabling interface with additional memory 22 that may be desired to store a greater number of words, phrases, sentences, facial expression tables and programming, speech recognition nets, speech recognition programming of the RSC 4128, and the programming of the logic and resultant behavior of the doll. If desired, a separate chip of flash memory can be added to separately record speaker-dependent voice recognition.
  • ROM: read-only memory
  • FIG. 7A pictorially illustrates a number of the food accessories that the doll may be programmed to recognize when the accessory is inserted in the mouth of the doll, including a cookie 51 , milk bottle 53 , carrot 54 , pizza 50 , pancakes 58 , and juice 59 .
  • a baby's feeding bowl 55 contains three separate areas for holding three different spoons with each area holding a small portion of the simulated food in the associated spoon. In other words, one spoon contains spaghetti 56 , another contains macaroni 57 and the third contains cereal 52 .
  • the simulated food that is held in a respective spoon may also be recognized when inserted in the mouth of the doll. Additionally, as shown in FIG. 7C , a toothbrush 60 may be recognized when inserted into the mouth of the doll after the doll requests the player to brush the teeth of the doll following the partaking of food or before bedtime.
  • the foregoing accessories are suitably fabricated of plastic that is decorated with the appropriate visual appearance of the respective food product and/or implement being simulated, which the child recognizes (or learns what the accessory is intended to represent).
  • an RFID tag 51 ′ is embedded in the plastic of the simulated cookie accessory and contains the information that identifies the accessory. Like tags are also illustrated in dash lines in the other accessories shown in the figure (and those in FIGS. 7B and 7C as well), but are not labeled.
  • the RFID tag in a respective accessory is uniquely configured to enable the RFID reader inside the doll to wirelessly identify the unique configuration of information in the tag and communicate that information to the doll controller 20 .
  • the controller is able to associate each unique configuration with a particular one of the accessories for the doll, and thus let the doll “know” the identity of the accessory inside the mouth of the doll, what the doll is wearing, or on the object which the doll is seated, and so on.
  • the foregoing identification is deferred until the program run by controller 20 of the doll achieves a state at which the program requires the reader to determine if the sensor is detecting an RFID tag. Until that stage in the program is reached, the doll does not make any kind of verbal or behavioral indication of the identification.
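  • A minimal sketch of that deferred identification follows: the program polls the mouth reader only when its state calls for it and then maps the tag to an accessory. The tag codes, accessory names and helper functions are illustrative assumptions; the actual encodings programmed into the accessories are not disclosed.

```c
#include <stdio.h>

/* Illustrative tag codes; the real encodings are not given in the
 * specification. */
enum accessory_id {
    ACC_NONE = 0, ACC_COOKIE, ACC_MILK_BOTTLE, ACC_CARROT,
    ACC_PIZZA, ACC_TOOTHBRUSH, ACC_POTTY, ACC_DRESS, ACC_NIGHTY
};

/* Stand-in for mouth reader 32: in the doll the controller polls the
 * reader over its input/output lines; here a fixed value is returned
 * for demonstration. */
static enum accessory_id poll_mouth_reader(void)
{
    return ACC_COOKIE;
}

static const char *accessory_name(enum accessory_id id)
{
    static const char *names[] = {
        "nothing", "cookie", "milk bottle", "carrot",
        "pizza", "toothbrush", "potty", "dress", "nighty"
    };
    return names[id];
}

int main(void)
{
    /* The program only polls the reader when it reaches a state that
     * calls for it, e.g. after the doll has asked to be fed. */
    int program_wants_food_check = 1;

    if (program_wants_food_check) {
        enum accessory_id id = poll_mouth_reader();
        if (id != ACC_NONE)
            printf("Mmm, a %s!\n", accessory_name(id));
        else
            printf("doll keeps asking for food\n");
    }
    return 0;
}
```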
  • RFID reader 32 is approximately one inch square in size and includes a small antenna, not illustrated, that is mounted parallel to and just above the vinyl skin in the top (roof) of the mouth of the doll.
  • In order for the reader to read the RFID tag, the tag should be located directly below (e.g. adjacent) and parallel to the antenna in the RFID reader.
  • Reader 32 senses the food accessory (e.g. senses the RFID tag in that accessory) when the accessory is inserted into the mouth of the doll. Because the very small sized reader 32 is located in the mouth of the doll, it is now possible to move the lips of the doll (varying the distance between the RFID tag and the tag reader) to simulate chewing when simulated food, such as those items illustrated in FIG. 7A , are inserted in the mouth of the doll.
  • the RFID tag always remains in RF range, even though the mouth is open wide or is moving. Where the particular food article is especially thick, too thick for insertion into the mouth of the doll, then a portion of that article is formed with a reduced thickness portion decorated to simulate a piece of the food article that was previously eaten. That reduced thickness portion fits inside the mouth of the doll and contains the RFID tag. Cookie 51 , and pancake 58 in FIG. 7A are shown to contain a simulated partially eaten portion.
  • the mechanical mouth movement that emulates chewing (and sucking or sipping) does not interfere with the RFID reader's ability to read the identification of the food accessory, a novel feature.
  • the identification of the simulated food that's inserted in the mouth of the doll is transparent to the young child. That transparency reinforces the illusion that the doll is actually a live person or is magical.
  • the small size of the RFID reader 35 in the doll's back, more specifically located at the back beneath the neck of the doll, enables the RFID tag on the clothing to be read transparently once the clothing is placed on the doll properly and the Velcro attachment on the clothing is closed to hold the RFID tag directly over the RFID reader.
  • the RFID reader 34 in the doll's butt enables the controller to know (e.g. detect) when the doll is actually seated on an accessory that contains an RFID tag, and identify that accessory. This is particularly important in the potty accessory 40 of FIG. 4 .
  • the illusion that needs to be conveyed to the child, consistent with the illusion that the doll is alive, is that the doll knows whether or not she is sitting on the potty and that “she did it!” if a pee-pee or poo-poo sound occurred while the doll was so seated. The face of the doll in that event will present concurrently a satisfied demeanor.
  • if the doll had a simulated bowel movement or urination before reaching the potty, then the doll had an “accident” and should be a little upset at her mistake.
  • the demeanor of the doll face concurrently would be changed by the facial control actuators to represent that the doll is upset.
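  • The sketch below is one illustrative reading of that behavior: the demeanor selected after a simulated elimination sound depends on whether the butt reader detected the potty's tag at that moment. The function and expression names are assumptions made for the example.

```c
#include <stdbool.h>
#include <stdio.h>

/* Expressions referred to in the text; the selection logic below is
 * an illustrative reading of the described behavior. */
enum expression { EXPR_SATISFIED, EXPR_UPSET, EXPR_NEUTRAL };

/* Decide the doll's demeanor after a simulated pee-pee/poo-poo sound,
 * based on whether butt reader 34 saw the potty's RFID tag then. */
static enum expression potty_outcome(bool elimination_sound_played,
                                     bool seated_on_potty)
{
    if (!elimination_sound_played)
        return EXPR_NEUTRAL;
    return seated_on_potty ? EXPR_SATISFIED  /* "she did it!" */
                           : EXPR_UPSET;     /* an "accident" */
}

int main(void)
{
    enum expression e = potty_outcome(true, false);
    printf("expression = %s\n",
           e == EXPR_SATISFIED ? "satisfied" :
           e == EXPR_UPSET     ? "upset"     : "neutral");
    return 0;
}
```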
  • the facial expressions of the doll head are varied and controllable. Those expressions include those that accompany natural speech, referred to as natural speech expressions, as well as a smile, chewing expressions, listening, sleepiness, yawning, surprise, unhappiness, crying and excitement.
  • Actuators, such as electromechanical or electromagnetic actuators, carried in the doll head and controlled by doll controller 20 are coupled to locations in the head of the doll that control: the movement of the flexible lips of the doll's mouth; the vertical spread of the jaw and, hence, the lips and mouth of the doll; the smiling (upturn) or unhappy (downturn) positioning of the right and left ends of the lips; the upward movement of the cheeks while the mouth of the doll is concurrently positioned in a smile; the upward movement of the inner portion of both eyebrows as the doll eyes open in an expression of wonderment or surprise; the projection of the lower lip of the doll mouth downward and forward in a pout; the slow opening of the mouth combined with sleepily closing eyelids in the expression of a yawn; the fluttering of the eyelids on wakening of the doll; and the eyebrows and the eyeballs of both eyes of the doll.
  • the actuator that opens and closes the doll mouth can be called a mouth actuator; and, collectively, the actuators for the face of the doll may be said to constitute a facial actuator.
  • the foregoing actuator arrangement is pictorially illustrated in FIG. 5A to which reference is made.
  • Head 14 contains actuators 61 , 63 and 65 for controlling the shape and position of the lips 32 of the doll.
  • the actuators are coupled to a respective output 28 of controller 20 , not illustrated in this figure, and are either energized or deenergized by the computer in accordance with the computer application program that is being “run.”
  • Actuators 61 and 63 when energized, pull the ends of the lips, which effectively changes the appearance of the mouth from a normal one to a wide one. When de-energized, the skin material, which is elastic, restores to the normal size, and the mouth returns to a normal appearance.
  • Actuator 65 , when energized by the controller, spreads the upper and lower lips apart (e.g. opens the mouth of the doll); actuator 65 thus essentially serves as a mouth control actuator.
  • Actuators 67 and 69 are coupled to respective outputs 28 of the controller and respectively control the left and right cheeks of the doll, which are formed in the elastic skin of the doll face. When energized by the controller, the actuators move the cheeks upward. When de-energized, the cheeks restore to the normal position.
  • actuators 61 and 63 spread the lips to a wide position and actuators 67 and 69 move the cheeks upward. Those movements produce a smile.
  • the actuators may be bi-directional in which case the controller program directs the movement and direction of the respective actuator.
  • the actuators, or some of the actuators, may be of a unidirectional type with a spring: when energized, the actuator moves in one direction and tensions the spring (or the elasticity of the doll skin); when de-energized, the actuator is returned to the initial position by the energy stored in the spring.
  • Actuators 71 and 73 are coupled to respective outputs 28 of the controller and respectively control the right and left eyelids of the doll to thereby open (or close) the eyes of the doll.
  • the eyelid is attached to the eye of the doll, a spherical member, and covers a circumferential portion of that sphere.
  • a shaft mounts that sphere in the eye socket in the doll face for rotational (e.g. pivotal) movement about an axis defined by the shaft.
  • the latter actuators produce a rotational movement when energized.
  • actuators 71 and 73 When energized, actuators 71 and 73 essentially cause the doll to open the doll's eyes.
  • Actuators 75 and 77 are also coupled to a respective output of the controller and simultaneously control the right and left eyebrows of the doll, respectively.
  • actuators 75 and 77 push (or pull) on the elastic skin of the doll face to slightly stretch the skin. Since the skin carries the image of the eyebrows, stretching the skin effectively raises the eyebrows.
  • the skin contracts and resiliently restores to the normal condition, moving the eyebrows back to the normal position.
  • the doll's mouth would be opened, the eyes would be wide open and the eyebrows would be raised, a familiar appearance of a surprised person.
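  • A minimal sketch of driving the FIG. 5A actuators for two of those expressions follows; the energize helper is an illustrative assumption, while the actuator numbering follows the description above and which actuators fire for each expression is summarized from the text.

```c
#include <stdio.h>

/* Actuator numbering follows FIG. 5A as described above. */
enum { A_LIP_LEFT = 61, A_LIP_RIGHT = 63, A_MOUTH = 65,
       A_CHEEK_LEFT = 67, A_CHEEK_RIGHT = 69,
       A_EYELID_RIGHT = 71, A_EYELID_LEFT = 73,
       A_BROW_RIGHT = 75, A_BROW_LEFT = 77 };

/* Hypothetical helper standing in for driving one controller output. */
static void energize(int actuator, int on)
{
    printf("actuator %d -> %s\n", actuator, on ? "on" : "off");
}

/* Smile: spread the lips wide and raise both cheeks. */
static void express_smile(void)
{
    energize(A_LIP_LEFT, 1);
    energize(A_LIP_RIGHT, 1);
    energize(A_CHEEK_LEFT, 1);
    energize(A_CHEEK_RIGHT, 1);
}

/* Surprise: open the mouth, open the eyes wide and raise the brows. */
static void express_surprise(void)
{
    energize(A_MOUTH, 1);
    energize(A_EYELID_RIGHT, 1);
    energize(A_EYELID_LEFT, 1);
    energize(A_BROW_RIGHT, 1);
    energize(A_BROW_LEFT, 1);
}

int main(void)
{
    express_smile();
    express_surprise();
    return 0;
}
```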
  • controller 20 indirectly detects the food article.
  • the controller is programmed to have the doll, among other actions, taste and chew the food.
  • the controller is further programmed with an expression control module or subsidiary program, and one operational routine in that control module or program is the act of chewing.
  • the controller issues voice messages that broadcast chewing sounds from the loudspeaker 39 , concurrently with issuing commands to the mouth actuator 65 and lip actuators 61 and 63 to move the jaws or lips to simulate the facial expressions that normally accompany chewing by a live person.
  • the two lips come together and/or the lower jaw moves up and down repeatedly in a chewing cycle; if the food is potato chips or apples, the doll concurrently broadcasts a “munch” sound through the loudspeaker.
  • The actuator arrangement that is coupled to the skin of the doll face should be capable of contorting the mouth, eyelids, cheeks and eyebrows to produce various expressions ( FIGS. 8 a through 8 f ). Some of those expressions are illustrated, such as expressions normally associated with yawning ( FIG. 8 a ), sleepiness, attentive listening, feigned smile ( FIG. 8 b ), surprise ( FIG. 8 c ), unhappiness, crying ( FIG. 8 d ), excited speech ( FIG. 8 e ), chewing, and waking with fluttering eyelids ( FIG. 8 f ).
  • the eyelids ( FIG. 8 f ) of the doll can also be fluttered by the actuators, an action that may occur when a child is awakening.
  • a control system that uses such individual electromagnetic actuators is more expensive at present than desired.
  • Such a control system would necessitate a price for the resultant doll that at the present time is so high at both wholesale and retail that the doll would not sell to buyers or would have very limited distribution and low volume sales. Therefore, a specific mechanism was developed for the doll by others to whom the requirements for construction of a practical embodiment of the invention was made known; and that mechanism proved less complicated and less expensive than individual actuators for a practical embodiment.
  • the device was also slightly less versatile. That device used two motors to move gears and/or a number of cams that could accomplish the foregoing facial movements within acceptable limits.
  • the face of the doll is formed of a flexible, but strong, thermoplastic elastomeric material, a rubber like plastic material that presents a human-like skin in feel and appearance. That material forms a skin that fits over and overlays a stiff plastic skull and contains suitable openings in the skin to accommodate the mouth and eyes carried in the skull, and the skull houses the controller-controlled actuator mechanism.
  • the actuator mechanism should be able to move or distort that skin so as to produce facial expressions on the doll that appear realistic.
  • This thermoplastic elastomeric skin is injection molded to shape and when formed stretches over and onto the rigid plastic skull.
  • small rigid plastic connector tabs are insert molded into the interior of the foregoing skull skin. Those connector tabs connect certain parts of the skin to respective actuators, enabling the actuators, when operated, to more positively push or pull the skin to change the appearance of the doll face.
  • the mechanism is achieved with two motors and a double cam mechanism later herein more fully illustrated and described.
  • That design includes two double sided cams, each operated by a single motor that is able to rotate the cams clockwise or counterclockwise.
  • the cams in combination with two gearboxes operating them, move levers that attach to respective ones of the rigid insert molded plastic connectors that are molded into the inside surface of the skin material that covers the skull and forms the doll's face.
  • the strategically configured placement and shape of the cam surfaces, together with the positions in the skin at which the actuator connectors are specifically molded, enable facial movements of the strategically selected portions of the doll's face. Those facial movements achieve far more realistic human movements, in the applicant's view, than any doll ever made or marketed previously.
  • each of the two motors is bi-directional and is capable of rotating either clockwise or counterclockwise (forward or reverse); the motor speed can change to rotate fast or slow, and the duration of the rotation of the motor shaft occurs as directed by controller 20 ( FIG. 3 ).
  • the direction, speed and duration of rotation for each facial movement is determined by the doll controller and that in turn is controlled by the programming logic and game flow directing the actions.
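  • The sketch below captures the parameters the controller would issue for one such movement: which motor, the direction, the speed and the duration. The data structure and example values are illustrative assumptions, not the doll's actual command format.

```c
#include <stdio.h>

/* Parameters for one facial movement in the two-motor, double-cam
 * mechanism; values below are illustrative, not from the patent. */
enum motor { MOTOR_EYE_BROW_CAM, MOTOR_MOUTH_CHEEK_CAM };
enum direction { DIR_CW, DIR_CCW };

struct cam_move {
    enum motor     motor;
    enum direction dir;
    int            speed_percent; /* 0..100 of full motor speed */
    int            duration_ms;   /* how long to run the motor */
};

/* Stand-in for the controller driving a motor output for a while. */
static void run_motor(const struct cam_move *m)
{
    printf("motor %d: %s, %d%% speed, %d ms\n",
           (int)m->motor, m->dir == DIR_CW ? "cw" : "ccw",
           m->speed_percent, m->duration_ms);
}

int main(void)
{
    /* A slow yawn on the mouth/cheek cam, then a quick blink on the
     * eye/eyebrow cam. */
    struct cam_move yawn  = { MOTOR_MOUTH_CHEEK_CAM, DIR_CW, 30, 1200 };
    struct cam_move blink = { MOTOR_EYE_BROW_CAM,    DIR_CCW, 90, 150 };
    run_motor(&yawn);
    run_motor(&blink);
    return 0;
}
```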
  • the motors connect to gears and/or cams and/or levers that rotate the eyeballs of the doll to a closed position as the lower jaw of the doll is moved downwards and then immediately rotate the eyeballs to the open position and the jaw upwards. That action produces “blinking” of the eyes in a natural manner while the doll is “talking.”
  • the controller causes the doll eyes to blink repetitively simulating the natural eyelid movement of a living person who is in the course of speaking.
  • the motors could also open the eyelids and eyes of the doll to a wide open position and concurrently move the eyebrows of the doll upward to simulate a facial expression of surprise; could also lower the eyelids of the doll slowly while concurrently widening the mouth of the doll to simulate an expression of yawning; or close the eyelids and simulate the appearance of sleeping.
  • the motors could also lift the cheeks of the face of the doll upwards and backwards and the corners of the mouth of the doll upwards and back to produce a smile on the doll face in a very human-like manner, or turn the corners of the mouth of the doll down and/or protrude the lower lip so that the doll face gives the expression of unhappiness or even appears to pout.
  • FIG. 5B illustrates the mechanism in an exploded view of the doll head from which the covering custom elastomeric skin is omitted.
  • the front and rear head plates of rigid plastic, 100 and 101 join together to define an internal region in which the remaining parts illustrated in the figure are housed.
  • the front head plate contains the various openings or windows for the eyes, cheeks, nose, and mouth, as illustrated.
  • the two cams 103 and 104 are mounted side-by-side in the head for rotation about a common axis.
  • Each cam is formed of a molded stiff plastic that is circular and generally flat in shape, disk-like, with the front and back faces of that disk containing at least one, and possibly two cam tracks.
  • Each cam track is formed between a parallel pair of raised ridges located on the face of the disk, that simulates and is equivalent to a grooved track.
  • Those cam tracks are strategically configured irregular ovoid/elliptic shaped tracks about the center of the disk.
  • the cam track controls the somewhat lateral position of a cam follower, formed of a short round peg or lever, as a function of the rotational position of the cam.
  • One or more such cam-track is included on each of the two faces of each cam, only one of which faces on each cam is illustrated in the figure.
  • Cam 103 controls the movement and change of position of the eyes of the doll by means of the cam-track on one face of the cam and controls the eyebrows of the doll by means of the cam-track on the opposite face of that cam. Therefore cam 103 is sometimes referred to herein as the eye and eyebrow cam.
  • Cam 104 controls the movement and change of the position of the mouth of the doll by means of the cam-track on one face of the cam and controls the cheeks of the doll by means of the cam-track on the opposite face of the cam. Therefore cam 104 is sometimes referred to herein as the mouth and cheek cam.
  • the doll head contains a pair of eyes 105 , an eyebrow actuator connector 106 , and an eye lever 107 .
  • the mouth of the doll is represented by structure 108 .
  • the head further contains additional eye levers 119 and 120 .
  • Eye lever 119 is connected to and follows a cam-track in cam 103 .
  • a pair of cheek actuators 109 connects to a cheek actuator connector 110 .
  • a cheek lever 111 connects to cheek connector 110 .
  • a battery operated DC motor 112 drives cam 104 , indirectly, through the appropriate set of three gears 113 , with power being supplied to the motor under control of the controller and, hence, the program being run by the controller.
  • the motor attaches to a motor casing 114 , enclosing the gears 113 and that casing includes a rotating joint 115 driven by the lower gear in the figure that rotates the shaft that drives cam 104 .
  • the doll head also includes a motor housing 116 that houses a second DC motor, not illustrated, that drives cam 103 , indirectly, through an appropriate set of gears, also not illustrated.
  • Motor housing 116 includes a rotatable shaft 117 that is exposed on the far side.
  • a wiper 118 is placed in abutment with shaft 117 and defines a shaft position indicator that senses position where the metal contacts of the wiper contact metal contacts that rotate with the shaft. Through appropriate wiring, not illustrated, wiper 118 reports the shaft position to the controller. That information or feedback is useful to enable the controller to properly turn the cams so that the mouth of the doll is opened wide in the case of expressing surprise or is opening slightly when the doll is to express a smile.
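  • A minimal sketch of using that feedback follows: the motor is stepped until the wiper reading reaches a target position. The position encoding, step size and helper functions are illustrative assumptions standing in for the actual motor drive and wiper contacts.

```c
#include <stdio.h>

/* Stands in for the wiper 118 read-back of the cam shaft position. */
static int shaft_position = 0;

/* Hypothetical stand-ins for one motor step and the wiper reading. */
static void step_motor(int direction) { shaft_position += direction; }
static int  read_wiper(void)          { return shaft_position; }

/* Rotate the cam toward a target position, using the wiper feedback
 * to know when to stop (e.g. mouth wide open vs. slightly open). */
static void move_cam_to(int target)
{
    while (read_wiper() != target) {
        int dir = (target > read_wiper()) ? +1 : -1;
        step_motor(dir);
    }
    printf("cam settled at position %d\n", read_wiper());
}

int main(void)
{
    move_cam_to(40);  /* e.g. "mouth wide open" for surprise */
    move_cam_to(10);  /* e.g. "slightly open" for a smile */
    return 0;
}
```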
  • the mechanism includes eye levers 119 and 120 ; second and third cheek levers 121 and 122 , in addition to cheek lever 111 , earlier noted; mouth levers 123 and 124 ; mouth extenders 125 and 126 ; mouth levers 127 , 128 and 129 ; and an additional mouth extender 130 . A lever plate 131 is included to support the aforementioned levers.
  • the strangely bent strip shown to the left of cam disk 104 constitutes the main crank 132 .
  • Casing 133 closes the rear end of the mechanism.
  • FIG. 5C shows many of the same elements as viewed from the same side as the exploded view of FIG. 5B , but assembled together, and FIG. 5D shows the elements as viewed from the opposite side of the skeletal head.
  • cheek lever 122 and mouth lever 123 are coupled to a cam-track in cam 103 .
  • those cam tracks are formed by parallel ridges formed on the sides of the disk-like cam that extend a small distance from the otherwise flat side of the disk. Forward motion on lever 123 pushes the cheek actuators 109 (in FIG. 5B ).
  • the mouth lever 123 likewise follows a cam-track in the cam disk and through the additional mouth levers 124 , mouth extenders 125 , 126 and 130 , and mouth levers 127 , 128 and 129 operates the mouth of the doll, not all of which are labeled in FIGS. 5C and 5D .
  • the foregoing mechanism basically accomplishes essentially the same functions as one that relies on the electromagnetic actuators that were described in connection with FIG. 5A , but is believed less expensive to fabricate.
  • That double motor double cam mechanism is found to possess the capability of accomplishing the proper control of the “facial muscles” of the doll face to produce the contortions that are appropriate to each of the emotions above noted, such as those illustrated in FIGS. 8 a through 8 f , including, but not limited to flutter, yawn, sleepy, listen, smile, chew and speech, surprise, unhappiness, crying, excited speech.
  • the strategic configuration of the irregular elliptic cam-track on the eye movement face of cam 103 and on cam-track for the eyebrow movement on the opposite face of cam 103 create three appropriate positions for both the eye and eyebrows as cam 103 rotates forward or backwards, seamlessly moving either clockwise (or counterclockwise) for various distances and at various speeds to change the eyes and eyebrows from one position to another.
  • the strategic configuration of the irregular elliptic cam-track on the cheek movement face of cam 104 , and on the mouth movement opposite face of cam 104 create appropriate positions for both the cheeks and the mouth as cam 104 rotates forward (or backwards) seamlessly moving either clockwise (or counterclockwise) for various distances and at various speeds to change the cheeks and mouth from one position to another.
  • the movement of the eyes may occur concurrently with movement of the eyebrows or independently of eyebrow movement.
  • the mechanism is mounted in the head of the doll and the electrical inputs to that mechanism are connected to the appropriate outputs of the controller 20 .
  • the doll is able to produce emotional expressions that surpass anything previously attempted.
  • controller 20 is preferably implemented in the form of a battery operated programmable microprocessor or microcontroller, as variously termed, and associated memory, including voice ROM, and a digital-to-analog converter and appropriate input and output interface circuits.
  • the microcontroller may also include an analog-to-digital converter and digital filters.
  • the foregoing may be implemented in a custom semiconductor integrated circuit chip, although separate chips may be used as an alternative, all of which are known and have appeared heretofore in interactive toys.
  • the digital clock need not be a separate unit as earlier described, but may instead be integrally formed on the chip.
  • the chip's inputs are connected to the respective sensors (and digital clock) described, and its outputs to the loudspeaker.
  • the micro-controller is programmed in accordance with the foregoing description and that program, the software, is stored in another portion of non-volatile memory or ROM.
  • the doll operates entirely on power supplied by batteries; that is, it is a self-contained, battery-operated unit.
  • the electrical battery (or batteries) 30 are initially inserted in the battery compartment inside the doll and the power switch is set to "on"; the right hand of the doll is squeezed, actuating switch 38, and the programmed set-up procedure for the doll commences. A few moments thereafter, the doll voices a yawn, contorts the doll face accordingly, and introduces herself as Amanda. The doll then asks the player to say the word "pizza." When spoken, the word is recognized by the doll.
  • the doll confirms that the doll heard the player state “pizza” and then asks the player to say the word “spaghetti.” When spoken by the player, that word is also recognized by the doll and the doll confirms aloud that it heard the word “spaghetti.”
  • with the voice recognition software analyzing the spoken words "pizza" and "spaghetti" and storing the analysis in memory as a code or pattern, the doll attaches the personage of "mom" to that analysis. In that way the child player becomes a de facto "mom."
  • when powered on for the first time, or upon receipt of a special command, the doll interrogates the mother in the most natural way possible in order to determine (and set) the current date and time on the clock of the controller; whether the doll should observe daylight savings time; the wake-up time for the doll; and bedtime. Each time an answer is given by the child, that answer is verbally confirmed by the doll before proceeding. Additional topics could be programmed into the controller program for interrogation, provided that the topic is relevant to a function of the doll.
  • the doll first speaks a particular year, such as 2005, and then asks the player if the year that was spoken is correct. If the player answers in the negative, the doll tells the player to squeeze the right hand of the doll and release that hand from the squeeze only when the correct year is spoken. The doll then broadcasts the various years in serial order. When the year 2005 is spoken, the player releases the right hand and the set-up procedure continues. If the player makes a mistake, the player is able to note the mistake when the doll states the year and asks for verbal confirmation from the player, to which a "yes" or a "no" is spoken. If negative, the doll repeats the entire procedure until a correct year is confirmed by the player.
  • the doll recites the name of a month and asks if that is the correct current month, requesting an answer of either "yes" or "no." If the player states "yes," the answer is recognized and the set-up procedure next addresses the date in the month; if "no," the doll recognizes that and commences to repeat the same procedure described above for an incorrect year, but for an incorrect month instead, which includes squeezing the right hand of the doll. The doll then speaks the date number for the day of the month and requests an answer from the player. The protocol for settling upon a correct date is the same as described for the year and the month. Once the correct date is settled, the doll makes a statement of the correct date: "It is now Nov. 4, 2005," as example.
  • the program then moves to stating a wake-up time and a sleep time, which are negotiated with the player and settled upon. Alternatively, the program gives the player the option to choose those times or to skip doing so. If the player wishes to select a wake-up and/or sleep time, those times of day are selected in the same manner previously described herein for selection of the year, month and day.
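  • A minimal sketch in C, assuming hypothetical stubs for the hand switch and the speech recognizer, of the squeeze-and-release selection loop with verbal yes/no confirmation described above; it is an illustration of the protocol, not the actual firmware.

        #include <stdio.h>
        #include <stdbool.h>

        /* Hypothetical input stubs: a real doll would poll the hand switch and
         * the speech recognizer; here the answers are scripted for demonstration. */
        static bool hand_squeezed(void)   { static int t = 0; return ++t < 3; }
        static bool player_said_yes(void) { return true; }
        static void say_number(int n)     { printf("doll says: %d\n", n); }

        /* Recite values in serial order while the hand stays squeezed; the value
         * being spoken when the squeeze is released is the candidate, which is
         * then confirmed verbally.  On a "no" the recital repeats from the start. */
        int select_by_squeeze(int first, int last)
        {
            for (;;) {
                int value = first;
                for (;;) {
                    say_number(value);                 /* doll recites the value      */
                    if (!hand_squeezed() || value == last)
                        break;                         /* squeeze released: candidate */
                    value++;
                }
                if (player_said_yes())                 /* "is that correct?" yes/no   */
                    return value;                      /* caller stores it in EEPROM  */
                /* answer was "no": repeat the whole recital from the beginning */
            }
        }

        int main(void)
        {
            int year = select_by_squeeze(2005, 2099);
            printf("year set to %d\n", year);
            return 0;
        }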
  • the foregoing information is programmed into the EEPROM (electrically erasable programmable read-only) memory during the set-up procedure and is retained even when the doll is powered off.
  • the clock function continues running so long as the power switch to the DC power supply remains in the power-on position.
  • the calendar information, that is, the time, day, month and year as of the time the power was turned off, is saved in memory.
  • the doll program causes the microcontroller to run a brief set-up procedure that enables the user to reset the time from the prior time that was saved. That information may be edited or changed, however, by actuating a reset key 27 , shown in FIG. 3 .
  • the doll may verbally query the child to speak the name of the doll so that the doll may obtain the required voice sample of the child to identify the child as the mommy. Additional topics could be programmed into the controller program for interrogation, provided that the topic is relevant to a function of the doll. Many variations can be made in the set up procedure.
  • the doll broadcasts a question through the speaker 39 and asks: “Say my name mommy” and may state that request twice. The doll will know when her make-believe “mommy” states the name of the doll, which in the preferred embodiment is Amanda.
  • the voice recognition software within the controller 20, e.g. virtual recognition process 25, analyzes the response and produces an electronic voice pattern of the "mommy."
  • the doll may ask again and with the additional reply at this stage, the doll recognizes the person doing the speaking as “mommy,” and the doll is programmed to reply with a statement, such as “I love you mommy.” Thereafter, the doll continues with the set up procedure as specified in the set-up program for the controller.
  • Verbal messages are broadcast from the loudspeaker 39 under control of the microcontroller by outputting the contents of various locations in the voice ROM, and applying that digital information to a digital-to-analog converter or equivalent virtual converter, not illustrated, forming a speech synthesizer. From there the sound information propagates to loudspeaker 39 and is broadcast. The digital form of the message is converted to the analog form that drives the loudspeaker and produces the desired verbalization of audible sounds, words and other voice messages.
  • the verbal messages and sounds are preferably human voices that are recorded as digital information in a portion of the ROM memory, which portion may be referred to as the voice ROM, using any standard technique.
  • Those verbal messages such as those earlier described, may be stored as complete sentences or, alternatively, as words and partial phrases, dependent in part on the amount of memory available or which one prefers to include.
  • the inventor's script also contains commands for the dual-motor, dual-cam facial expression mechanism for producing, on the face of the doll, the facial expressions that the inventor wishes to accompany those verbal messages, including the verbal statements presented in this text.
  • Appendix A contains statements that are spoken by the doll and the accompanying changes to the components of the doll face to change the facial expression of the doll. The latter are arranged in two columns. The right hand column recites the position of the eyebrows, eyes, mouth and cheeks prior to (and following) the change, and the left hand column describes the change to those components from the foregoing default condition.
  • the verbal messages may include songs that are sung by the doll, music and/or special sound effects.
  • the doll is capable of singing several songs. It is also capable of producing the “pee-pee” and “poo-poo” sound effects of a child eliminating on the toilet, sounds which amuse and excite young children.
  • Memory chips have become relatively inexpensive, which minimizes any necessity for reducing the size of memory included in the doll.
  • the preferred approach is to store complete sentences of spoken words. That allows for a higher quality of sound reproduction.
  • messages may be stored, as appropriate, as individual words, partial phrases, and/or full expressions, and then pasted together, e.g. concatenated, for broadcast.
  • the verbal message: “I want a banana” may be parsed in separate parts and stored in different areas of the memory as “I want a” and as “banana”.
  • the microcontroller selects and consecutively outputs the two sections from the memory in proper order.
  • the digitized audio may be compressed using any conventional compression algorithm during the recording process to further minimize memory requirements; and the program should then include implementation of an algorithm for decompressing that compressed digitized audio as it is played back.
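  • By way of illustration only, the following C sketch shows the kind of concatenation described above, with strings standing in for blocks of recorded (and possibly compressed) audio in a hypothetical voice ROM; the segment contents and function names are illustrative, not taken from this specification.

        #include <stdio.h>

        /* Hypothetical "voice ROM": each entry stands in for a block of recorded,
         * possibly compressed, digitized audio.  Strings are used here only so the
         * sketch is runnable; real firmware would hold sample data and decompress
         * it before sending it to the digital-to-analog converter. */
        static const char *voice_rom[] = {
            "I want a",     /* segment 0 */
            "banana",       /* segment 1 */
            "cookie",       /* segment 2 */
        };

        /* Play a message assembled from stored segments, in order. */
        static void play_segments(const int *ids, int count)
        {
            for (int i = 0; i < count; i++) {
                /* decompress_and_play(voice_rom[ids[i]]) in real firmware */
                printf("%s%s", voice_rom[ids[i]], i + 1 < count ? " " : "\n");
            }
        }

        int main(void)
        {
            int want_banana[] = { 0, 1 };   /* "I want a" + "banana"          */
            int want_cookie[] = { 0, 2 };   /* the same stem reused elsewhere */
            play_segments(want_banana, 2);
            play_segments(want_cookie, 2);
            return 0;
        }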
  • the doll may be modified to incorporate a separate clock calendar, such as a digital clock calendar chip, in lieu of the clock in the prior embodiment.
  • the device may, in addition to the heretofore mentioned clock function, be programmed to keep track of the weekly, monthly, and yearly passing of time (i.e., a calendar function).
  • the calendar would be set in a manner similar to the clock where, as earlier described in the "set the time" mode, the clock would be set to the hour of the day, the minutes of the hour, and whether it is AM or PM.
  • the “Set the date model” would automatically occur on the program menu, permitting the parent to also set the month by inputting a number between 1 and 12, then set the day of the month by inputting a number between 1 and 31, and then set the year by inputting the appropriate four numbers for the current year, such as 2005.
  • the set-up procedure is accomplished verbally and manually, which avoids the necessity for inclusion of a visual display, such as an LCD.
  • the doll instructs the child to squeeze the right hand of the doll and to release that squeeze when the child hears the doll broadcast the correct number, starting with the year.
  • the doll then starts by speaking 2005, 2006, 2007 and so on, until the child releases the squeeze of the hand of the doll, thereby selecting the year.
  • the doll makes the same request for the correct month, and speaks the months, one, two, and so on (recycling if necessary each time the twelve figure is passed) until the child releases the squeeze of the right hand of the doll; then repeats the procedure for the day of the month.
  • the controller includes information in memory of the date that daylight savings time commences in each of the U.S., the U.K., Australia and New Zealand, all of which are English speaking countries.
  • the doll broadcasts an instruction to the player to squeeze the right hand of the doll, whereby pressure is applied to said pressure-operated switch, and to release that squeeze when the correct year is broadcast from said speaker; the recital of the years is terminated when the squeeze is released and the year last recited is placed into a memory. The doll then broadcasts the year last recited and requests the player to confirm whether the year so broadcast is correct by answering yes or no. If the answer is no, the immediately preceding three steps are repeated until the foregoing answer is yes.
  • the doll broadcasts the first year, 2005, and with each squeeze the doll broadcasts the next successive year. That continues until one stops squeezing the doll hand for a predetermined time, whereupon the doll broadcasts the last year broadcast and queries the person if the year spoken was correct.
  • the foregoing approach is in a sense forgiving. One can obtain the year date in either of the two described protocols or any combination thereof.
  • the month, day, and year stored in the memory is broadcast by the doll and the player is requested to confirm that the information broadcast is correct or not by answering yes or no; and, if the answer is no, repeating the preceding steps of setting the year, month and date from the beginning until the final answer becomes yes.
  • the clock time is set.
  • the time set up begins by broadcasting a statement to the player as to the positions of the small and large hands of a clock and requesting the player to confirm the correctness or not of that statement by answering yes or no; and, if the answer is no, broadcasting an instruction to the player to squeeze the doll hand, wherein pressure is applied to said pressure operated switch, and release the squeeze when the correct hour of day is broadcast from said speaker, followed by broadcasting the hours of the day in serial order through said loudspeaker.
  • the recital of the hours of the day is terminated when said squeeze is released on attainment of the current hour, removing pressure from said pressure operated switch, and placing the hour of the day that was last recited into a memory.
  • an instruction is broadcast to the player to squeeze the hand, and release that squeeze when the correct minutes of the hour are broadcast from said speaker; thereafter, the minutes of the hour are broadcast in five minute increments in serial order through said loudspeaker.
  • the recital of the minutes of the hour is terminated when the player releases the squeeze on attainment of the current minute increment, removing pressure from said pressure operated switch, and placing the minute increment of the hour that was last recited into a memory.
  • if the answer to the foregoing query is yes, information is broadcast to the player on the meaning of AM and PM, and the player is queried whether it is necessary to reset what the clock indicates, answering yes or no.
  • a query is broadcast to the player asking whether the time of day is AM, to be answered yes or no; if the answer is yes, AM is placed in said memory, but if no, PM is placed in said memory.
  • the player is next queried whether the player observes daylight savings time and the player is requested to answer yes or no.
  • the answer given by the player is recognized and stored in memory. If the player answers yes, then the controller is set to adjust the time at the next occurrence of either moving the time forward one hour on the particular day of the year that daylight savings time begins or moving the time back one hour on the particular day of the year that daylight savings time ends.
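  • A minimal sketch in C of a daily daylight-savings check of the kind just described; the structure names and the example start/end dates are hypothetical (the actual dates stored for each country are not reproduced here and change from year to year).

        #include <stdio.h>
        #include <stdbool.h>

        typedef struct { int month, day; } date_t;

        typedef struct {
            bool   observes_dst;     /* answer given by the player during set-up */
            date_t dst_start;        /* clocks go forward one hour on this date  */
            date_t dst_end;          /* clocks go back one hour on this date     */
        } dst_config_t;

        /* Called once per day by the clock routine: returns the hour offset to
         * apply today (+1, -1, or 0). */
        int dst_adjustment(const dst_config_t *cfg, date_t today)
        {
            if (!cfg->observes_dst)
                return 0;
            if (today.month == cfg->dst_start.month && today.day == cfg->dst_start.day)
                return +1;     /* spring forward */
            if (today.month == cfg->dst_end.month && today.day == cfg->dst_end.day)
                return -1;     /* fall back */
            return 0;
        }

        int main(void)
        {
            dst_config_t cfg = { true, { 3, 12 }, { 11, 5 } };  /* example dates only */
            date_t today = { 3, 12 };
            printf("adjust clock by %+d hour(s)\n", dst_adjustment(&cfg, today));
            return 0;
        }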
  • the bedtime and wake-up times for the doll may be set in a similar manner, either by the hand-squeeze approach or by a verbal yes or no answer to a question as to whether the player wishes to set a wake-up and bed-time for the doll.
  • the preferred embodiment features a variety of play patterns.
  • Those play patterns are defined in the following table and an abbreviated version as used in a practical embodiment appears in APPENDIX B to this application:
  • Hugs & Kisses
    10 Dressing & Grooming: Change from Jammies to Daytime Outfit; Change from Daytime Outfit to Jammies; Dress Up; Hair Play; Jewelry
    11 Go Out: Restaurant/Fast Food; Car Routine; Plane; Shopping Mall; Grocery Store (Dressing)
    12 Exercise
    13 Recognition: Mom; Grandpa (Papa); Grandma (Nanna); Best Friend; Great Grandma; Great Grandpa
    14 Daycare
    15 Babysitter
    16 Illness
  • the play patterns can be either child-initiated or doll-initiated.
  • Child-initiated Play. In child-initiated play, the child commands the doll to initiate a particular play pattern. The easiest way to think of the play is to visualize a hierarchical menu system, triggered via voice activation.
  • the doll then begins the Funny Face game routine.
  • Doll-initiated Play. The internal clock keeps track of the date and the hours and minutes.
  • the program of the controller uses the clock to determine the following behaviors. Eating: When told "It's time to eat," or when the doll decides it is "hungry," between 6 AM and 10 AM the doll will ask for breakfast; between 10 AM and 3 PM the doll will ask for lunch; and between 3 PM and 8 PM the doll will ask for dinner. At other times the doll will ask for a snack.
  • Each of the eating routines involves different logic and accessories, such as the simulated food stuffs elsewhere herein described.
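  • The time-based meal rule stated above can be summarized by the following short C sketch; the function name is hypothetical and the code is only an illustration of the hour ranges recited in the text.

        #include <stdio.h>

        /* Meal chosen by the hour of day, per the behavior described above:
         * 6-10 breakfast, 10-15 lunch, 15-20 dinner, otherwise a snack. */
        const char *meal_for_hour(int hour)   /* hour in 24-hour time, 0..23 */
        {
            if (hour >= 6  && hour < 10) return "breakfast";
            if (hour >= 10 && hour < 15) return "lunch";
            if (hour >= 15 && hour < 20) return "dinner";
            return "snack";
        }

        int main(void)
        {
            int samples[] = { 7, 12, 18, 22 };
            for (int i = 0; i < 4; i++)
                printf("at %2d:00 the doll asks for %s\n",
                       samples[i], meal_for_hour(samples[i]));
            return 0;
        }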
  • the doll will express a desire for bed at the bedtime specified by the user in the initial set-up procedure.
  • the programmed child-like behavior includes some behavior in anticipation. As example, the doll may speak “Mommy, it's almost time for bed.”
  • the doll may also be programmed to wake-up from the sleep condition at a set wake-up time specified by the user.
  • the doll may greet her “mommy” with an appropriate time-based phrase, such as “Good afternoon mommy!” Occasionally on waking, the doll will also announce the time of day.
  • the doll may ask for different clothing. In the morning the doll asks for her dress. In the evening, the doll asks for her nightie.
  • the doll knows (e.g. is programmed to recognize) a number of holidays. On those days the doll will occasionally speak out with “Happy (holiday) mommy!” or similar phrase, where (holiday) is the given day.
  • the doll is also programmed to anticipate the holiday, say a month in advance, and to speak in anticipation of it. As example, the doll may be programmed to speak "Mommy, Santa Claus is coming soon!" some weeks prior to Christmas.
  • the doll keeps track of the frequency of performance of many behaviors and tries to avoid repeating the behavior too many times in a given interval of time.
  • the doll also uses time intervals to create more realistic behavior. As example, if the doll has visited the potty recently and is asked to go to the potty again, the doll will speak: "Mommy, I just went potty. You want me to try again?"
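  • A minimal sketch in C, assuming a hypothetical per-behavior cooldown, of how repetition within a short interval might be suppressed; the structure, the interval values, and the behavior names are illustrative only.

        #include <stdio.h>

        /* Hypothetical cooldown tracker: a behavior is suppressed if it was last
         * performed within its minimum repeat interval (in minutes). */
        typedef struct {
            const char *name;
            int cooldown_min;     /* minimum minutes between repetitions        */
            int last_done_min;    /* clock reading when last performed, -1 = never */
        } behavior_t;

        int behavior_allowed(const behavior_t *b, int now_min)
        {
            return b->last_done_min < 0 || now_min - b->last_done_min >= b->cooldown_min;
        }

        int main(void)
        {
            behavior_t potty = { "go potty", 30, 100 };   /* last went at minute 100 */
            int now = 110;                                /* only 10 minutes later   */
            if (!behavior_allowed(&potty, now))
                printf("doll says: Mommy, I just went potty. You want me to try again?\n");
            else
                printf("doll plays the potty routine\n");
            return 0;
        }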
  • the simulated day of the doll is divided into a series of ten sequential time windows. Within each window, certain play patterns may be initiated by the doll, occurring with a particular frequency. Some play patterns occur only within a particular time window, such as lunch or waking, while other play patterns are present in many windows. Each play pattern is assigned a percentage chance of occurring at intervals within a window.
  • Time Windows and Play Patterns (Time Window: Available Play Patterns):
    1 Wake Up: <Wake up sequence> I'm thirsty. I have to go potty. I love you mommy. Let's play. Mommy do my hair? My tummy hurts.
    2 Breakfast: I'm hungry, can I have breakfast? I'm thirsty. I have to go potty. I love you mommy. Let's play. Mommy do my hair? My tummy hurts.
    3 Between Time: I'm hungry, can I have a snack?* I'm thirsty. I have to go potty. I love you mommy. Let's play. Mommy do my hair? My tummy hurts.
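  • A minimal sketch in C of a percentage-chance selection of the kind assigned to play patterns within a window; all of the percentages below are invented for the example and are not the values used in any practical embodiment.

        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        /* Each play pattern available in the current time window carries a
         * percentage chance of being initiated at a check interval. */
        typedef struct {
            const char *phrase;
            int percent;          /* chance of firing at each check, 0..100 */
        } pattern_t;

        static const pattern_t between_time[] = {
            { "I'm hungry, can I have a snack?", 10 },
            { "I'm thirsty.",                    10 },
            { "I have to go potty.",             15 },
            { "I love you mommy.",                5 },
            { "Let's play.",                     20 },
        };

        /* Called once per check interval within the window; at most one pattern
         * fires, chosen by independent percentage rolls in list order. */
        const char *pick_pattern(const pattern_t *p, int count)
        {
            for (int i = 0; i < count; i++)
                if (rand() % 100 < p[i].percent)
                    return p[i].phrase;
            return NULL;   /* stay quiet this interval */
        }

        int main(void)
        {
            srand((unsigned)time(NULL));
            const char *say = pick_pattern(between_time, 5);
            printf("doll says: %s\n", say ? say : "(nothing this time)");
            return 0;
        }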
  • the preferred doll embodiment also features annual, time-based, doll-initiated play.
  • with the doll controller programmed with a series of special days, the controller enters an "anticipation mode" two weeks prior to each special day, and then a "special day mode" on the day itself. While in these modes, certain aspects of the simulated doll behavior are changed. Some of the regular play patterns are changed to reflect the special day. For example, upon waking on the little girl's birthday the doll would say "Happy birthday mommy!" instead of (or in addition to) her normal waking behavior.
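  • A minimal sketch in C, using the standard C time library for brevity, of the two-week anticipation window and the special-day check just described; the dates and names are illustrative only.

        #include <stdio.h>
        #include <time.h>

        /* Determine whether the doll should be in "special day mode" (the day
         * itself) or "anticipation mode" (within the 14 days before it). */
        typedef enum { MODE_NORMAL, MODE_ANTICIPATION, MODE_SPECIAL_DAY } day_mode_t;

        day_mode_t mode_for(struct tm today, struct tm special)
        {
            today.tm_hour = special.tm_hour = 12;       /* compare at mid-day */
            double diff_days = difftime(mktime(&special), mktime(&today)) / 86400.0;
            if (diff_days > -0.5 && diff_days < 0.5) return MODE_SPECIAL_DAY;
            if (diff_days > 0 && diff_days <= 14.0)  return MODE_ANTICIPATION;
            return MODE_NORMAL;
        }

        int main(void)
        {
            struct tm today    = { .tm_year = 105, .tm_mon = 11, .tm_mday = 15 }; /* Dec 15, 2005 */
            struct tm birthday = { .tm_year = 105, .tm_mon = 11, .tm_mday = 25 }; /* Dec 25, 2005 */
            static const char *names[] = { "normal", "anticipation", "special day" };
            printf("doll mode: %s\n", names[mode_for(today, birthday)]);
            return 0;
        }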
  • if the doll fails to receive a response from her mom, say within ten minutes, the doll will go to sleep, essentially entering a power-down mode during which only the internal clock remains powered up and continues to operate.
  • the foregoing allows the child to place the doll in bed and refrain from responding to the doll so the doll can go to sleep, that is, enter the sleep mode.
  • Squeezing the hand of the doll or hugging the doll causes the doll to awaken.
  • Actuation of the hand switch or hug switch is detected by the controller 20 , which is programmed to recognize the input during the power down mode as requiring restoration of the electrical power and re-commencement of the doll activity program, signifying the awakening from slumber.
  • the internal clock in the doll continues to run, and, as recalled, during the set-up procedure, the doll may have had a definite wake-up time set by the mother (or child). That wake-up time is usually some time during the morning, say 7:00 AM. Thus should the sleep mode continue through to that wake-up time, the controller detects attainment of the wake up time, and essentially wakes up the doll, placing the doll into the mode for normal activities, elsewhere herein described.
  • the doll can also be placed in a quiet mode, when desired.
  • the controller is programmed to interpret the action as a command for the doll to power down.
  • the controller issues instructions to broadcast a verbal message, specifically, “O.K. Mommy, I'll be quiet now,” giving the child oral feedback of the activation, and then prevents further audible broadcasts from the doll.
  • after four minutes in that condition, the doll broadcasts the query: "Can we talk now?" If the doll does not hear a response from the child within a short interval, the doll controller 20 powers down the electronics, placing the doll in sleep mode, and waits for a wake-up command. In either the quiet mode or the sleep mode, the doll is reactivated or, as otherwise stated, is awakened simply by either squeezing the right hand of the doll (e.g. operating the right hand sensor 38) or by giving the doll a hug (e.g. operating the hug switch 33).
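  • A minimal sketch in C of the quiet-mode and sleep-mode transitions described above as a simple state machine; the event names are hypothetical, and the four-minute query followed by a short wait is collapsed into a single timeout event for brevity.

        #include <stdio.h>

        typedef enum { AWAKE, QUIET, SLEEP } doll_state_t;
        typedef enum { EV_QUIET_CMD, EV_TIMEOUT, EV_HAND_SQUEEZE, EV_HUG, EV_WAKE_TIME } event_t;

        doll_state_t next_state(doll_state_t s, event_t ev)
        {
            switch (s) {
            case AWAKE:
                if (ev == EV_QUIET_CMD) { puts("O.K. Mommy, I'll be quiet now."); return QUIET; }
                if (ev == EV_TIMEOUT)   return SLEEP;          /* ~10 minutes with no response */
                return AWAKE;
            case QUIET:
                if (ev == EV_TIMEOUT)   { puts("Can we talk now?"); return SLEEP; }
                /* fall through: a touch or the set wake-up time reawakens the doll */
            case SLEEP:
                if (ev == EV_HAND_SQUEEZE || ev == EV_HUG || ev == EV_WAKE_TIME)
                    return AWAKE;                              /* power back up */
                return s;
            }
            return s;
        }

        int main(void)
        {
            doll_state_t s = AWAKE;
            s = next_state(s, EV_QUIET_CMD);   /* child asks for quiet            */
            s = next_state(s, EV_TIMEOUT);     /* four minutes pass, no reply     */
            s = next_state(s, EV_HUG);         /* a hug wakes the doll            */
            printf("final state: %s\n", s == AWAKE ? "awake" : s == QUIET ? "quiet" : "sleep");
            return 0;
        }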
  • Discipline Routine. During child-initiated play, Amanda may occasionally, but rarely, misbehave. The following exchange between the doll and the child can take place.
  • the start-up programming for the user would then include an additional set-up step containing a display of, and requesting selection of, the particular language for the doll to speak. Language selection is accomplished by toggling the left and right hand sensors in the same manner as in setting the wake and sleep times.
  • Such a multi-language doll may be attractive also to parents who wish their child to learn a second language.
  • the doll can store and be programmed to sing songs, accompany the speaking parts with music and/or sound effects, with or without parsing of short messages as above described, and with or without digital compression.
  • the doll may be programmed to ask the player to "say my name mommy," in which case the doll is able to confirm that the person who states the name of the doll has the same voice print pattern as that of the pseudo-mother.
  • the doll may be programmed to recognize multiple persons and distinguish between those persons based on different voice prints. As example, voice prints can be made of a person who is to be the grandmother, the grandfather, uncle or aunt of the doll or any other family members and the doll is able to recognize and distinguish between those persons.
  • bonding may represent camaraderie, where the child is leader of a band of heroes, or, conversely, a friend of the doll owner as leader of the enemy force.
  • the foregoing form of bonding goes beyond bonding as being a mere attachment to a loved and needed person.
  • the doll can be programmed to interact with different persons in different ways, depending on the voice that the doll recognizes is speaking. That is, the memory of the doll may hold different speech messages and say something different depending on the identity of the person doing the speaking with the doll. For example, the doll could say “grandma” every time the doll would otherwise say the word “mommy.” The doll could say something entirely different.
  • the preferred embodiment of the doll is of the form of a female child under thirteen years of age and, more specifically, about two years of age, and is of the appropriate facial anatomy of that age and is dressed as such child.
  • the doll produces the sounds, phrases, and words of the kind typically spoken by a two (2) year old female child by means of which the doll asks questions of and/or prompts the player to speak aloud one of several particular answers, word(s) or phrases that the doll anticipates the player to speak.
  • the doll could be of the form of a human toddler and would be dressed as a toddler.
  • the doll produces the sounds, phrases, and words of the kind typically spoken by a toddler by means of which the doll asks questions of and/or prompts the player to speak aloud one of several particular answers, word(s) or phrases that the doll anticipates the player to speak.
  • the doll may be of the form of a teenage or adult female, possesses the facial appearance of a teenager and is dressed as such teen age female.
  • the doll produces the sounds, phrases, and words of the kind typically spoken by a teen age female by means of which the doll asks questions of and/or prompts the player to speak aloud one of several particular answers, word(s) or phrases that the doll anticipates the player to speak.
  • the doll is of the form of a male child, toddler, two-year old, tween, teen and so on, respectively dressed as such respective character, and/or is of the appearance of a fantasy character and is dressed as such fantasy character.
  • the doll produces the sounds, phrases, and words of the kind expected to be spoken by a fantasy character by means of which the doll asks questions of and/or prompts the player to speak aloud one of several particular answers, word(s) or phrases that the doll anticipates the player to speak.
  • the doll in still other embodiments could be fabricated in the form of an animal character, either one that produces animal sounds or one that is designed to speak like a human.
  • the doll produces the sounds, phrases, and words of the kind typically verbalized by the animal or by the animatronic character by means of which the doll asks questions of and/or prompts the player to speak aloud one of several particular answers, word(s) or phrases that the doll anticipates the player to speak.
  • the doll is of the form of a male child infant or toddler, in which case the primary play is for the user to act as a caregiver for the doll.
  • the primary play would be for the doll to serve as a best buddy for the child and do such things together, such as play games, tell stories, go on adventures, role play various characters together, and learn and play things together.
  • the doll appears as a fantasy character and is dressed as such fantasy character and produces the sounds, phrases, and words expected of a fantasy character by means of which the fantasy character asks questions of and/or prompts the player to speak aloud one of several particular answers, word(s) or phrases that the doll anticipates the player to speak.
  • the primary play is as an action figure, a disastrous or villainous, good or evil character
  • the young man imagines and plays-out situations, scenarios, and adventures in which he assumes the persona of the character with whom he is playing, i.e., mentally the boy “IS” the character.
  • Male action figures and characters presently available for boys play are not interactive. Some contain mechanical mechanisms that enable movement when the boy pushes a lever on the torso, appendages, or head of the action figure. Often weapons are carried by action figures, and projectiles are often launched to emulate the firing of weapons and so forth. Depending on the size of the action figure, electronics are sometimes included that contain lights and sounds of weapons or short phrases made by the action figure.
  • the action figure contains animatronics that moves the face into expressions of happiness, rage/anger, fierceness, puzzlement, excitement, boredom, fatigue, impatience, nobility, and so forth.
  • Such visual expression of actual emotions in an action figure is also believed to be revolutionary. Even more revolutionary is for the action figure to respond to a user both emotionally and with words in conversation. The figure could become angry or feign anger if the wrong response occurred. An example of that anger might be that the action figure wanted to retrieve his weapon from the user, and the user would not turn the weapon over to the figure.
  • the voice recognition feature enables a male player to assume the role of ally or enemy, i.e., friend or foe, when play commences.
  • the action figure then assumes the friend or foe relationship with the player and through Virtual Conversational Interactivity plays out battles, adventures, and other scenarios with the boy.
  • the embodiment thus enables an entirely new level of play.
  • the ability for the figure to recognize clothing, protective armor, battle gear, weapons, and so forth and interact conversationally with the player is entirely new. This enables the figure and the young player to strategize how to game or battle. It also enables the player to “command” or instruct the action figure how to battle, attack, fight, and so forth.
  • the additional technology enabling the figures to recognize either clothing or protective armor, weapons, playthings, vehicles in which it sits, and other play accessories, together with the speech recognition technology, animatronics, and programming, enables all embodiments of the invention to possess a form of artificial intelligence. That intelligence provides an additional basis for the initiation of virtual conversational interactivity and play scenarios.
  • the doll is of the appearance of a real animal, insect, fish, crustacean, or other living creature or presents that living creature as a cartoon, such as, but not limited to a dog, cat, bear, bird, bunny, reptile, horse, or pony, in which the animal's visual animatronic movements of the face and/or body suggests to the user that the doll possesses human characteristics, such as feelings and emotions.
  • sounds of the living animal such as, but not limited to, a bark for a dog, a chirp for a bird, a meow for a cat, and so forth, adds realism.
  • the tone of such sounds is varied appropriately through the doll's programming to communicate what the animal is thinking.
  • One example is the whine of a dog to indicate that something is wrong, pain, a need for attention, or begging.
  • Such is the present interactivity of a real animal with a human, and such interactivity is often back and forth in that a human might ask a real pet, “what's wrong?”, or “shut up!”, or “are you hungry?”, but not actually expect the pet to answer.
  • the dog may wag his tail and the human may believe that is a yes answer to the question, or the dog may continue to whine and the human may believe that the question was not the correct cause of the whining.
  • the programming of the doll dog causes the dog to have a need, and the child (or adult) user will need to discover the need of the animal in order to satisfy that need.
  • such back-and-forth interactivity is used in addition to the animal doll's (dog's) facial animatronics, which are anthropomorphic in nature, i.e., suggesting to the child or user that the animal has feelings and emotions. I refer to that action as Anthropomorphic Virtual Interactivity™, "AVI™".
  • such AVI interactivity is back and forth in the real way that humans communicate. Sensor devices to determine whether the animal has been fed or groomed are present, just as in the human embodiment.
  • a sensor in the dog's butt will enable the puppy to know if it went potty on a newspaper, or a sensor in a cat's butt will enable the kitten to know if it is on the litter box, so they can be toilet trained.
  • Voice recognition is present to bond the animal doll with its owner or master (the child or user). Speech recognition is present to enable the animal doll to recognize what the human is saying, i.e., it enables the animal doll to be "trained," to learn "commands," and to learn what is good behavior ("good dog" or "good kitty") or bad behavior ("bad dog!" or "bad kitty!").
  • Pet clothing can be put on the animal doll and it will know that a particular coat for warmth, or a collar to go for a walk, has been put in place, enabling the animal doll to "know" or assume what the intention of the owner is in play.
  • the AVI technology of the animal doll can additionally cause the animal to express feelings to the owner.
  • the doll in still other embodiments could be fabricated in the form of an animal character that is designed to speak like a human.
  • the doll produces the sounds, phrases, and words of the kind typically verbalized by the animal or by the animatronic character by means of which the doll asks questions of and/or prompts the player to speak aloud one of several particular answers, word(s) or phrases that the doll anticipates the player to speak.
  • the versatility of RFID programming adds to the capability for doll play in additional embodiments in which the RFID sensor is used to read at least two separate RFID tags at essentially the same time. This capability is illustrated in connection with the potty accessory 40 in FIG. 4, to which reference is again made.
  • doll 10 includes an RFID sensor positioned in the butt end or rear and that sensor reads the RFID tag contained in the doll potty 40 , whereby the doll “knows” it is seated on the potty.
  • One of the articles of clothing worn by the doll is a diaper, and that diaper would also contain an RFID tag so as to identify the diaper to the RFID sensor in the doll butt. In that way the doll "knows" it is wearing a diaper.
  • if the program, as earlier described in this specification, calls for the doll to request to go potty, the doll program would have the doll request that the diaper be removed first so that the doll is able to go to the potty without interference from the diaper. After the diaper is removed, the doll broadcasts a "thank you, mommy" or like message and asks again to be placed on the potty.
  • the doll will know if the doll is seated on the potty since the RFID sensor detects the RFID tag in the potty, and the doll will continue with the various programmed sounds of defecation earlier discussed.
  • if the doll is seated on the potty while still wearing the diaper, the RFID sensor in the doll will sense and read both RFID tags.
  • the doll knows that the player neglected to remove the diaper. Accordingly, the doll will provide a message possibly correcting or berating the player, and decline to defecate until the diaper is first removed.
  • the sensor can achieve wider detection coverage and is able to pick up RFID tags displaced at greater distances from the sensor.
  • the RFID tag in the diaper need not directly overlie the RFID tag in the potty when the doll is seated on the potty wearing the diaper.
  • the sensor reads them both, and can read them in any order called for by the program.
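  • A minimal sketch in C of the decision logic just described for the potty and diaper tags; the tag identifiers and the spoken messages in the sketch are hypothetical stand-ins, not the values or phrases used in any practical embodiment.

        #include <stdio.h>
        #include <stdbool.h>

        /* Hypothetical tag identifiers; a real reader would return the IDs of
         * whatever tags are in range of the sensor in the doll's rear. */
        #define TAG_POTTY  0x01
        #define TAG_DIAPER 0x02

        /* Decide the doll's response from the set of tags currently read.
         * Both tags may be reported at essentially the same time, in any order. */
        void potty_routine(const int *tags, int count)
        {
            bool on_potty = false, wearing_diaper = false;
            for (int i = 0; i < count; i++) {
                if (tags[i] == TAG_POTTY)  on_potty = true;
                if (tags[i] == TAG_DIAPER) wearing_diaper = true;
            }
            if (on_potty && wearing_diaper)
                puts("Mommy, you forgot to take my diaper off!");   /* decline to go */
            else if (on_potty)
                puts("<potty sound effects>");                      /* proceed       */
            else
                puts("Put me on the potty, please!");
        }

        int main(void)
        {
            int seated_with_diaper[] = { TAG_DIAPER, TAG_POTTY };
            int seated_ready[]       = { TAG_POTTY };
            potty_routine(seated_with_diaper, 2);
            potty_routine(seated_ready, 1);
            return 0;
        }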
  • an embodiment of a doll need not include each and every feature described herein. Some embodiments may contain a full complement of features, while other embodiments may omit one or more features and contain a lesser number of features. All such embodiments are included in the invention. These various embodiments of the invention may be addressed separately and/or in combination in the claims which follow in this specification.

Abstract

A high-tech doll produces human-like expressions, recognizes words, and is able to carry on a conversation with a living person, as example, addresses time based subjects, such as the food to eat at various times of the day, and expresses attitudes, emulating a living child. A child player acquires an emotional bond to the doll whilst the doll appears to bond to the child. The doll exhibits facial expressions produced concurrently with spoken words or separately to provide body language, representing emotions, such as happiness, sadness, grief, surprise, delight, curiosity, and so on, reinforcing the emulation of a living child. Additional features add to the play.

Description

    CROSS REFERENCE TO RELATED APPLICATIONS
  • This non-provisional application for patent is related to an earlier-filed provisional application for patent of the inventors, Ser. No. 60/748,391, filed Dec. 7, 2005, entitled Interactive Hi-Tech Doll, the entire content of which is incorporated herein by reference in its entirety. Applicant claims the benefit under 35 U.S.C. 119(e) and 35 U.S.C. 120 based on the foregoing provisional application.
  • FIELD OF THE INVENTION
  • The present invention relates to high-tech dolls and, more particularly, to an interactive talking doll that in both speech and facial expression simulates the attitude and conversational capability of a very young child, chews food, asks questions, understands answers, sings songs, plays games and, all in all, is a great companion for a young child.
  • BACKGROUND
  • One plaything that's endured over the years is the doll. Typically, a young girl's parents will at some time purchase a doll as a gift for their daughter, fully expecting the gift to be received with appreciation. The parents are never disappointed. Often the girl's grandparents do the same for their granddaughter. They regard the play with dolls to be a wholesome activity and, importantly, an activity that's a good deal of entertainment for the child. One finds that young girls in that doll play develop a tender, nurturing relationship with the doll, often mimicking their mother's behavior toward themselves in “mothering” the doll for hours on end and/or treating the doll as a friend or confidant.
  • Today for the most part dolls are fabricated of better materials, are more real or life-like in appearance, feel and dress, and, importantly, are more sophisticated technologically than in the past. In other words, the dolls of today can be more “high-tech.” The addition of sophisticated technology is intended to increase the play value of the doll, giving the child additional challenges and better engaging the child's creativity. One recently marketed doll that significantly advanced the doll technology is the Amazing Amy doll, earlier marketed by the Playmates Company of California and described in my prior patent, U.S. Pat. No. 6,554,679, granted Apr. 29, 2003, entitled Interactive Virtual Character doll.
  • One expects that the term “doll” may have a different meaning to different persons. Hence, before proceeding further into the background to the present invention, it should prove helpful to provide some definition of “doll.” That should aid one to better understand the prior art and the invention, or at least ensure that the reader's understanding of the meaning is the same as the applicant's. Although the foregoing paragraph describes a particular doll that represents a small child in appearance, one realizes that the technology is not so limited in application. To avoid unnecessarily limiting the present invention, a broader definition is appropriate.
  • According to conventional thinking, a doll is a figure that in appearance looks like a person, usually a small girl or boy, and constitutes a plaything for a child, typically for a young girl. In more modern times that kind of plaything has de facto expanded in definition in one direction to include small male figures, referred to as action figures, that are played with by young boys without the stigma of femininity (e.g. dolls for boys), and even virtual beings resident in an electronic game and visible to the child only as an image on the liquid crystal display of the electronic game. The scope of the doll category has also expanded in definition in another direction to include animals, cartoon characters and fantasy characters. Thus, the term doll as used herein is intended to encompass all such forms and should so be understood when reading this specification and the described technology that improves all those structures. That's true, even when the preferred embodiment of the present invention, as will be seen, is of the form of a small child, as later herein described.
  • In all of the foregoing forms described in the initial paragraphs, the doll contains a body or torso, a head and at least some appendages, such as arms and/or legs. Assuming the doll is to mimic a young female child, the arms, legs, torso and head are typically designed to appear natural and life-like in appearance and feel.
  • The discussion of the background in my prior patent, U.S. Pat. No. 6,554,679, granted Apr. 29, 2003, entitled Interactive Virtual Character doll (the “'679 patent”), provides an ample summary of the innovations that preceded that invention in an interactive virtual character doll, which is also of interest to the present invention. Indeed, the entire content of that prior patent is believed material to the present invention and that content is incorporated by reference herein in its entirety.
  • The '679 patent recognized various types of player inputs to the doll that resembled real life activities, such as feeding, grooming, playing, dressing and the like. That input was accomplished in part by inclusion of two sensors carried in the doll that, with the aid of an electronic controller in the doll, essentially recognized objects. One sensor was located in the mouth of the doll and recognized insertion of various simulated fruits and vegetables into the mouth of the doll. Each simulated fruit or vegetable carried an electrical resistance of a value that was unique to the object. That resistance was included in a series circuit between two electrical contacts on the simulated food object. By engaging electrical contacts carried on the sensor with the corresponding contacts on the simulated food object, the resistance is effectively placed in a resistance measuring circuit in the electronic controller in the doll. The resistance measuring circuit measured the value of the electrical resistance between those contacts, thereby identifying the particular food by the resistance value assigned thereto by the doll designers, and the controller processed that information. The second sensor carried on the back of the doll recognizes the insertion of (e.g. dressing of) an article of clothing on the doll.
  • Further, a magnetic sensor, a magnetic switch, was also carried in the head of the doll. The magnetic switch identified the presence of a hair brush that was moved across the hair of the doll by the child, when the magnet that was carried in the hair brush actuated the magnetic switch as the hairbrush was swept across the hair of the doll.
  • Many dolls (and other character toys) emulate the emotions of the kind of character the respective doll or character toy represents. The emulation uses sound effects and/or facial sculptures. Those sculptures may be of plastic, vinyl, porcelain, soft sculpted fabric or plush, or other means to depict the emotion. For example, the face of the doll is sculpted to appear sad. The child holding that doll may believe the doll character is unhappy, because the doll character has an appearance that, through experience, the child has learned is consistent with sadness or, in the case of an animal or other non-human form of doll, the character's anthropomorphic appearance of sadness. Those actions are recognized by the child as being associated with sadness. The illusion of sadness is reinforced if a crying or whimpering sound is introduced and broadcast from the loudspeaker on the doll.
  • Realism is further enhanced by introduction of mechanical animatronics, adding the opening and closing of the mouth of the doll, the opening and closing eyes/eyelids of the doll, and some additional movements of facial parts, such as eyebrows and lips that were somewhat synchronized to the audio sounds originating from the doll.
  • However, until the present invention, no character doll was known that could be trained to at least recognize the child's voice. Then too, no character doll was known that was able to actually carry on a conversation with the child, enabling the doll to actually interact verbally, like a person in a tete-a-tete by giving an appropriate response to the child as a conversation between the doll and the child occurs, back and forth. Although an illusion, the tete-a-tete appears very real to the child who's playing with the doll.
  • In addition, until the present invention no character doll was known that enabled the doll to react with emotions to what the child said or did not say, such as being upset, angry, sad, happy, surprised, exuberant, excited, sleepy, hungry, fussy, needy, lonesome, or in want of companionship, play, or its mother, in speech, phrases and sound effects that convey readily recognized emotions. That kind of conversational interactivity is combined for the first time with facial animatronics that actually emulate the virtual emotional feeling expressed by the character of the doll.
  • Through the Internet, applicant learned that academics at the Massachusetts Institute of Technology previously succeeded to some degree in translating the emotions of a small child into a robot, more specifically into a skin-less robot head. The MIT study focuses on the construction of robots that engage in meaningful social exchanges with humans.
  • Although the field of robots is far removed from being a child's playthings and toys, it may be of interest in learning of the sophisticated open-ended learning procedures and the human emotions assertedly reproduced as facial expressions. The reader may access the internet and review the robotic head, referred to as Kismet, at the website www.ai.mit.edu/projects/kismet. The robot's vision system consists of four color CCD cameras mounted on a stereo active vision head. Two wide field of view (fov) cameras are mounted centrally and move with respect to the head. These are 0.25 inch CCD lipstick cameras with 2.2 mm lenses manufactured by Elmo Corporation. They are used to decide what the robot should pay attention to, and to compute a distance estimate. There is also a camera mounted within the pupil of each eye. These are 0.5 inch CCD foveal cameras with 8 mm focal length lenses, and are used for higher resolution post-attention processing, such as eye detection.
  • The foregoing appears to involve very sophisticated techniques to enable the robot "brain" to learn, as well as to convey reaction in a facial expression, using controlled actuators that are controlled by a series of networked desktop computers to control the simulated lips and the eyes and eyelids of the robot. As example, happiness is represented in the robot head by lips turned upwardly at the sides and fully open eyes to produce what the robot developer perceives to be a happy face. However, the robot head is unable (e.g. unequipped) to talk or carry on a conversation with a living person, as its only auditory communication is babbles of no known human language. It is clear from the website that much thought and government-funded work has gone into developing the Kismet robot over the past ten years or so to the present stage of development, but the device still appears to be a work-in-progress to explore how socially stimulated learning is served by exploiting the types of interaction that arise between a nurturing caretaker and an immature learner. The sorts of capabilities targeted for learning are those social and communication skills exhibited by human infants within the first year of life. The demonstrated robot head is physically and electronically incomplete and is obviously unsuited as a plaything (and likely will never be a plaything). Even if the developers of Kismet someday succeed in developing a human-like robotic infant head that is able to learn, on its own, such a device would hardly be capable of serving as a child's baby doll, companion and friend as does the present invention. Nor would it be affordable as a consumer product intended for use by a child.
  • Devices, such as lockets that transmit certain signals recognized by the character doll, rings containing magnets that are detected and recognized, and other such physical devices worn by a child have been used in the past to enable the character doll to recognize the child wearing or holding the device or, in the case of a little girl's doll, the pretend mother of the doll. However, until the present invention, no inanimate character doll was able to recognize the voice of a child or the pretend mother as would represent a bonding of the doll with the person. The present invention enables the character of the doll to actually recognize the respective voice of the owner and/or player of the doll, the child, and/or the pretend mother of the child. Hence, the doll "knows" that the person to which the doll character is bonded is present.
  • Bonding is the close personal relationship that forms between people, such as parent and child. The person in that relationship experiences an emotional need for the other person and communicates that need through expressions of love to that person when present. Bonding is expressed through the need for the presence of the one to whom the character is bonded. Additionally, no character doll is known that actually emulates the emotional bonding of the character doll to a child, emulating actual emotional attachment to the owner, player and/or child and a need for that person.
  • In the present invention the emulation or illusion of bonding is achieved by combining several capabilities: enabling the doll to know the bonded person is present; animatronics expressing emotional facial expressions that convey the character's feelings for the bonded person, the make-believe mother; and emotional conversational interaction with that person. For the player, owner and/or child player, the foregoing creates for the first time an intimacy and feeling of bonding with the character because of the actual appearance of real communication and understanding between the character and the bonded person. The doll may request something of the child and the child in response declines that request (e.g. says "no"), which the doll understands. In response, the demeanor of the doll changes to disappointment and sadness, which the child observes. The child recognizes thereby that it has "hurt" the doll and is "touched" thereby in the heart. How can this action not produce an emotional bonding?
  • The doll invention is able to recognize accessories without requiring the electrical contacts and electrical resistors used in the preferred embodiment of the prior '679 patent. RFID readers and RFID tags assist in accomplishing the function of identifying the accessory, without physical contact between the tag and reader. That structure is transparent to the child. As an advantage, the invention is more magical to a young child.
  • Additionally, in the prior '679 patent, the sensor inside the doll's mouth required direct physical contact with the simulated food (piece) placed in the doll's mouth in order to make the appropriate eating or drinking sounds when the food was inserted in the mouth of the doll. Using electrical resistors to simulate feeding an animatronic doll with a mouth that is undergoing the motion of chewing is quite difficult because of the need for the physical contact of the metal contact points connected to the resistor in the food accessory play piece and the two metal sensors in the mouth, when the mouth is moving. The prior Amazing Amy doll, hence, did not possess that capability. As an additional advantage, the present invention is able to easily detect the particular food that is inserted in the mouth, even though the mouth is opening and closing.
  • OBJECTS OF THE INVENTION
  • Accordingly, a principal object of the present invention is to provide an interactive high-tech doll that is able to preoccupy the interest of a young child.
  • A further object is to provide an interactive doll that is able to recognize at least its mother's voice so the doll can know with whom the doll is conversing.
  • A still further object of the invention is to provide a doll that may in one embodiment interact with such things as singing songs and playing games with anyone or in other embodiments only with those whose voices the doll recognizes.
  • And, an additional object of the invention is to create a doll that's capable of carrying on a limited conversation, such as by asking a question, recognizing the answer given in reply, and providing an additional response.
  • SUMMARY OF THE INVENTION
  • In accordance with the foregoing objects and advantages, a high-tech doll according to the present invention produces human-like expressions, recognizes words, is able to play games, carry on a conversation, and express attitudes, emulating a living child. Through the seeming magic of the doll appearing to actually respond emotionally to what a child says or does not say, or to what a child does or does not do when the doll verbally prompts the child, the child acquires an emotional bond to the doll. The doll is capable of bonding to the child and, conversely, the child is able to bond to the doll. The doll recognizes what the child says in response to prompts from the doll and is capable of carrying on a limited conversation with the child. In so doing, the doll exhibits facial expressions, produced by facial animators in synchronism with spoken words or separately, to effectively provide visible facial body language, representing any of happiness, sadness or grief, surprise, delight, curiosity, and so on to reinforce the emulation of a living child. The doll recognizes the voice of the child at particular points in the doll's programming.
  • A preferred embodiment of the invention may include the feature of voice recognition, wherein the doll is trained to recognize a word (or words) spoken in the child's voice (and that word spoken in that voice, e.g. a voice print, is categorized by the doll as the "mother" of the doll); the feature of speech recognition of words spoken to the doll by the mother and any others; or both of the foregoing features, e.g. a limited amount of voice recognition and mostly speech recognition. With a limited vocabulary, the doll recognizes whether it is being addressed by the pseudo-mother or by someone else. The doll creates a response to the child's verbal stimulus by answering the child with spoken words, usually accompanied by an appropriate facial expression or, alternatively, answers only with an appropriate facial expression as body language. Ideally, the child is taken in by the illusion that the doll is a living person.
  • In other more sophisticated embodiments, the voice recognition system of the doll is able to voice print members of the "family" in addition to the mother, such as grandma, grandpa, uncle and aunt and other family members, is able to identify which family member is addressing the doll, and provides individually tailored verbal messages or other responses to the particular family member identified. In other play patterns, the doll is able to establish relationships with respective family members, and those relationships are different in some respects from the relationship the doll has with the bonded person, the pseudo-mother.
  • Further, in accordance with another aspect of the invention, various simulated food accessories may be inserted in the mouth of the doll; the doll recognizes the food and simulates chewing that food. The food accessories contain RFID tags, electronically identifying the respective articles, while the doll includes RFID sensors, controlled by the controller of the doll, that read the RFID tags carried by the simulated food products that are inserted into the mouth of the doll. That RFID tag reading is transparent to the child. The doll may then broadcast spoken words indicative of knowledge of the particular simulated food product, further adding to the illusion of reality.
  • The foregoing and additional objects and advantages of the invention, together with the structure characteristic thereof, which were only briefly summarized in the foregoing passages, will become more apparent to those skilled in the art upon reading the detailed description of a preferred embodiment of the invention, which follows in this specification, taken together with the illustrations thereof presented in the accompanying drawings.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • In the drawings:
  • FIGS. 1A and 1B are respective pictorials of the preferred embodiments of the doll viewed from the front with the doll wearing a dress and the doll unclothed, respectively;
  • FIGS. 2A and 2B are like pictorials of the respective views of the doll of FIGS. 1A and 1B, respectively, as viewed from the rear;
  • FIG. 3 is a block diagram of the electronic package carried in the embodiment of FIG. 1;
  • FIG. 4 is a pictorial of a potty accessory for use with the embodiment of FIG. 1;
  • FIG. 5A is a block diagram illustrating the actuators located in the head of the doll for controlling movement and/or positioning of various parts of the doll face, such as the lips, jaw, cheeks, eyebrows and the like;
  • FIG. 5B is an exploded view of a preferred mechanism for controlling movement of various parts of the doll face incorporated in a practical embodiment of a doll in lieu of that of FIG. 5A;
  • FIG. 5C is a side view of the mechanism of FIG. 5B as assembled viewed from one side and FIG. 5D is a side view of that mechanism from the opposite side;
  • FIG. 6A is a pictorial of a dress in which to clothe the embodiment of FIG. 1 illustrating the position of the associated RFID tag in that clothing article and FIG. 6B is a pictorial of a nighty in which the embodiment of FIG. 1 is dressed for bed-time and showing the respective RFID tag location;
  • FIG. 7A illustrates various food articles that are recognized and used by the doll of FIG. 1 in play incorporating RFID tags with the respective articles;
  • FIG. 7B illustrates a dish accessory for play with the embodiment of FIG. 1;
  • FIG. 7C illustrates a toothbrush accessory used in play with the embodiment of FIG. 1; and
  • FIGS. 8A through 8F illustrate a variety of facial expressions produced in the face of the doll embodiment of FIG. 1 during play.
  • DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • The description that follows presents the best mode presently contemplated for carrying out the invention. Both the general principles of the invention and specific details for a practical embodiment of the invention are disclosed. Those disclosures should not be taken as limiting the scope of the invention, which is best determined from the numbered claims appended to the end of this specification. The invention is described in reference to a doll that emulates a living child. The doll interacts with a player and implements a virtual character, and, in the preferred embodiment, carries out the daily activities of an active child or incidents in a child's activities. It is understood that other characters both real and fictitious and even other figures and objects may be implemented in accordance with the invention, without departing from the scope of the invention.
  • FIGS. 1A and 1B are pictorial front views of one embodiment of the doll 10, dressed and undressed, respectively, and FIGS. 2A and 2B illustrate those views from the rear. Referring to FIG. 1B, doll 10 contains an outer shell or torso 12 and a head 14. That torso includes a number of parts that are attached together, such as sewn together with thread: arms and legs of padded vinyl, as example, Kraton® based soft skin, rotocast polyvinyl chloride ("PVC") or the like, typically stuffed with a filler material, as example polyfil, and shaped to resemble and flex like human skin; and a short-sleeved and short-pant sweater-like portion 11 of soft fabric material that attaches to one end of the arms and legs. The head 14 of the doll is formed of a hard stiff plastic skull that is covered by a molded-to-shape skin formed from a soft-to-the-feel thermoplastic elastomer (TPE) material or the like, molded to fit over and overlie the plastic skull (not visible in the figure), and containing openings for the eyes and mouth of the doll that are included in the underlying skull. The covering skin carries a head of hair 13, preferably formed of strands of Nylon, that is fastened to the skin. The doll head 14 is attached by a neck portion indirectly to the torso and is carried by that torso.
  • The interior of torso 12 of the doll is hollow and contains a plastic box 18, represented in dash lines, that houses the active electronic components, later more fully described, as well as a surround of bulk padding material to fill remaining space. Access to the interior is made by opening the seam 19 on the backside of the doll in the sweater-like portion 11 of the torso to expose a cover panel 16 of the plastic torso box and allow access to the battery compartment should battery replacement become necessary. That seam is closed by a Velcro® fastener, not illustrated, that closes off access to the internal hardware and electronics, but which may be repeatedly pulled open and then re-closed. The shape of doll 10 and the associated appendages are not particularly important conceptually to the invention, as becomes apparent from the description, but in this embodiment preferably conform with the proportions and appendages of a child. Due to the limpness of the sweater-like material 11, the attached appendages may be manually placed in a variety of positions, wherein the doll may be sitting or lying down. In other more complex embodiments the doll may be articulated if desired, either fully or partially, and/or have the arms, legs, and torso molded instead of partially made of fabric and, if so, may be placed in a greater variety of positions, including a standing or walking position. The overall size and weight of the torso 12 is shaped and sized to allow the player (i.e., typically a young child) to comfortably hold the doll in the child's arms without being unduly burdensome. As an example, the torso and head may be shaped as shown in the drawings, and may have an overall dimension (height×width×thickness) of about 20″×7″×3″ (inches).
  • The electronics package 18, represented in a dash-line block in FIG. 1B and in a more detailed block diagram in FIG. 3, is described with reference to that latter figure. The package includes a programmable electronic controller 20, the doll controller. The doll controller is a programmed general purpose electronic processor or microprocessor (or microcomputer), also known as a microcontroller, preferably in the form of a single integrated circuit ("IC") chip, which integrates a speech-optimized digital-to-analog converter (DAC) and an analog-to-digital converter (ADC) into a single IC chip capable of accurate speech recognition as well as high quality low data-rate compressed speech. The IC preferably provides further on-chip integration of features, including a microphone pre-amplifier, twin DMA units, a vector accelerator, a hardware multiplier, three timers and 4.8 kbytes of RAM. In addition to read-only memory ("ROM"), the IC provides an interface for external ROM, input and output ports, and various utilities, and is controlled by a program, represented by dash-line block 21, that is installed in the external ROM 22. Controller 20 includes various inputs 26 and outputs 28 and is powered by an external battery 30, which may be rechargeable and/or replaceable. Electrical current from the battery is supplied through the main power switch 31, an on-off type switch. Switch 27 is a reset switch, which may be a momentary operate type switch. In addition to internal memory, the controller may include access to external or add-on memory 22 provided on a separate memory chip, and electrically erasable programmable read only memory ("EEPROM"), not separately illustrated, which retains the memory contents even when electrical power is removed from the circuit. The inputs and outputs are electrically connected to various sensors and other components, such as the microphone and loudspeaker, later herein described, by electrical conductors or cabling, not illustrated in the figure to preserve clarity.
  • The controller is programmed to perform the various actions described herein, including the utility functions of a clock-calendar, speech synthesis, and the processes of speaker independent (speech) recognition and speaker dependent (voice) recognition. Although those and other utilities are included as stages in the programmed microcontroller 20, as an aid to understanding of the invention those functions may be separately represented in the figure in dash-line blocks associated with the programmed microcontroller 20, with speech and voice recognition illustrated as block 25, and the electronic clock-calendar as block 24.
  • Loudspeaker 39, also installed in the chest of the doll, connects to an output 28 of the controller 20. The loudspeaker converts any verbal information contained in the electrical signal output from controller 20 to an audible sound, suitably an analog sound, and acoustically broadcasts that sound to those near the doll who would be listening. As is conventional, that output from the controller may include the function of and/or be supplied from a digital-to-analog converter. If desired, an amplifier or other sound reproducer may be coupled in between the digital-to-analog converter and the loudspeaker for enhanced sound. A microphone, also installed in the chest of the doll, connects to an input of controller 20. At particular points in the programmed play of the doll, the doll "listens" for an answer to a particular prompt or question that the doll has spoken. At that point, and for a particular predetermined length of time, usually several seconds, the microphone receives responses and sound from those persons who are nearby, converts any audible verbal information or other sounds contained in that analog speech into a digital signal, and inputs that signal to controller 20. The speech recognition technology, programming, algorithms, and recognition nets/sets compare the converted analog, now digital, information received with the digital information that the doll was programmed to anticipate receiving and send to controller 20 a "yes," indicating the information expected was received, or a "no," indicating the information received was unexpected. The programming causes the doll to react verbally and animatronically, by means of facial expression, when the word is one that was expected by the programming, or to produce a different reaction when the word was unexpected or unrecognized, as the case may be.
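  • By way of illustration only, the listening-window behavior just described might be organized as in the following sketch; the function names (listen_for_answer, recognize_word) are invented for this example and do not represent the actual firmware of the doll.

```python
import time

def listen_for_answer(expected_words, window_seconds=3.0):
    """Open a listening window after a prompt and report "yes" (an expected
    word was heard), "no" (something else was heard), or None (silence)."""
    deadline = time.time() + window_seconds
    while time.time() < deadline:
        word = recognize_word()          # stub for the on-chip recognition net
        if word is None:
            time.sleep(0.05)             # nothing intelligible yet; keep listening
            continue
        return "yes" if word in expected_words else "no"
    return None                          # window closed with no speech detected

def recognize_word():
    """Placeholder for the speaker-independent recognizer in controller 20."""
    return None
```

A call such as listen_for_answer({"yes", "no"}) would then return "yes," "no," or nothing at all if the window closed silently.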
  • In effect, the doll appears to speak like a person. The doll issues verbal instructions and other information or requests to the child player by essentially speaking those instructions, which are played, that is, broadcast through the loudspeaker. In so speaking, the face of the doll is also animated: the mouth of the doll opens and closes, the eyelids may flutter, and so on, like a live person. The doll may also prompt the player at times during the course of an interactive conversation to respond with one of several possible words or phrases that the doll (or anyone else) might anticipate hearing in reply and, when the reply is recognized by the doll, the doll responds with an intelligent reply or behavior.
  • The various sensors in the doll include a mouth sensor 32, located inside the doll mouth, that aids in identification of foods (and other objects) that are placed in the mouth of the doll; a hug sensor 33 (e.g. a push button switch), located in the front mid-section of the doll, that aids in determining if the doll is being hugged; a butt sensor 34, located in the rear buttock of the doll that aids in determining when the doll is seated on an accessory, such as seated on the simulated toilet 40 (of FIG. 4) to go “potty;” a clothing sensor 35, located beneath the lower neck, that aids in identifying the clothing worn by the doll; a brush sensor 37, located at the front top of the head of the doll, that aids in detecting whether the hair of the doll is being brushed. The latter sensor may be in the form of a magnetic switch. And, of course, there is the audio sensor 36, a microphone, located in the front chest of the doll that detects sound and converts sound to an electrical signal and thus aids in the electronic recognition of particular sounds and verbal information. A force sensor 38 is included in the right hand of the doll to detect if the right hand is being squeezed. Another force sensor 38B may be included in the left hand to detect if the left hand is being squeezed.
  • The term “aid” was chosen to describe the action of the foregoing sensors, because in order for the action to occur, it is necessary to have the output signals of the respective sensors or switches interpreted by controller 20 before the identification or determination can be regarded as complete. The foregoing sensors and switches are not visible to the child during play with the doll so as not to detract from the illusion that the doll is a living person.
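  • The division of labor between the sensors and controller 20 can be pictured with a small sketch, assuming invented names and event labels; the point is only that a raw switch closure or tag read becomes meaningful after the controller interprets it.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SensorSnapshot:
    hug_switch: bool               # hug sensor 33
    butt_tag: Optional[str]        # RFID id seen by butt sensor 34, if any
    brush_switch: bool             # brush sensor 37
    right_hand: bool               # force sensor 38

def interpret(snapshot: SensorSnapshot) -> list:
    """Turn raw sensor states into named events; an identification is only
    complete once the controller has applied this interpretation."""
    events = []
    if snapshot.hug_switch:
        events.append("hugged")
    if snapshot.butt_tag == "POTTY_SEAT":
        events.append("seated_on_potty")
    if snapshot.brush_switch:
        events.append("hair_brushed")
    if snapshot.right_hand:
        events.append("right_hand_squeezed")
    return events
```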
  • Mouth sensor 32, butt sensor 34 and clothing sensor 35 are formed of radio frequency identification tag readers (“RFID reader”), a technology that's previously found application in inventory control and in supermarket applications. The terms sensor for RFID tags and RFID reader and RFID tag reader (that essentially defines the respective sensor) are synonymous terms. The RFID tag reader detects RFID tags that are placed on or in products. Those tags contain encoded information about the product to which the tag is attached. Such tags may be “smart” in which the information on the tag may be changed (e.g. reprogrammed) or “dumb,” in which the information does not change, the latter being the kind preferred in the present application.
  • Very small in size, about one inch square, RFID tag reader 32 is installed in the upper roof of the mouth of the doll. That reader is coupled, wired, to an input and an output, 26 and 28, of controller 20 (FIG. 3), permitting the controller through the tag reader to interrogate an RFID tag during the course of the system program of controller 20 and receive the output, such as the identification information contained in an RFID tag (not illustrated) that is properly oriented to and positioned in the doll mouth adjacent reader 32.
  • The RFID tag reader is able to pick up information from the RFID tag on the food item even though that reader may be spaced by as much as an inch from the food item. Thus, even as the mouth of the doll opens and closes by a distance of less than one inch to simulate the act of chewing and varying the distance between the tag and reader, the RFID reader inside the doll's mouth is still able to read the RFID tag carried by the food accessory and glean the identity of the food accessory.
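  • The range relationship can be illustrated with the following toy simulation; the one-inch read range, the jaw-gap figures and the tag code are assumptions made for the example, not measured values from the doll.

```python
READ_RANGE_INCHES = 1.0          # assumed read range of mouth reader 32

def read_tag(gap_inches):
    """Pretend RFID read: the tag id is returned only while it is in range."""
    return "COOKIE_TAG" if gap_inches <= READ_RANGE_INCHES else None

# The jaw gap varies between nearly closed and almost an inch during a chew
# cycle, so the tag in the food accessory never leaves the reader's range.
chew_cycle = [0.1, 0.4, 0.9, 0.4, 0.1]
assert all(read_tag(gap) == "COOKIE_TAG" for gap in chew_cycle)
```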
  • The inputs and outputs of the second RFID reader 34, installed in the lower back of the doll, specifically in the butt of the doll, are also coupled, that is, connected by electrical conductors or wired, to a respective output and input, 28 and 26, of controller 20. The second RFID reader permits the controller to interrogate an RFID tag during the running of the system program by the controller and receive the output, such as the identification information contained in an RFID tag 34′ that is installed within, as example, an accessory seat, such as the toilet seat of a child's potty 40, pictorially illustrated in FIG. 4. The tag is read when the RFID tag in the seat is properly oriented and positioned relative to RFID reader 34 in the butt region of the doll while the doll is in the seated position on the accessory toilet seat.
  • The third RFID reader 35, installed in the back of the upper right shoulder or neck of the doll, is similarly electrically connected to an input and output of controller 20 so that any RFID tag that is positioned closely adjacent to and properly oriented parallel to the reader, such as occurs when a tagged article of clothing is worn by the doll, may be polled for information by the controller when required by the program during the course of the doll's activity during play. As a result, the controller is able to identify the particular clothing article that is being worn. One article of clothing may be a dress 41 and another a nighty 43, as respectively pictorially illustrated from the rear in FIGS. 6A and 6B, to which reference is made. Dress 41 includes the identifying RFID tag 42 on the upper side by the collar of the dress so that the tag aligns with reader 35 when the dress is properly fitted on the doll. Likewise, nighty 43 includes the identifying RFID tag 44 that is interrogated by reader 35 when the nighty is properly fitted on the doll. The dress that appears on the doll in FIG. 1A is another article of clothing that may carry an RFID tag in the foregoing manner.
  • Referring again to FIG. 1, switch 23 serves as a position or movement sensor and is of any conventional design. The switch actuates if the doll is lifted or moved, causing a small metal ball inside the switch to move inside a metal cylinder and close a contact at one end or the other of the cylinder. Switch actuation thereby provides a signal to doll controller 20, which the controller is programmed to interpret to determine whether the doll is lying down, sitting up or upside down, and to take the appropriate next steps called for by the programming, such as later herein described.
  • The output of microphone 36 is connected to an input 26 of controller 20, through which any audio picked up by the microphone is processed by the voice and/or speech recognition program 25 of the controller. The microphone picks up verbal information and applies that information via controller 20 as an electrical signal to voice recognition processing. In a practical embodiment the foregoing function may be assisted by an analog-to-digital converter, a known device, if one desires or finds need to apply digital signals to the controller, before the signal is processed by the voice/speech recognition program 25.
  • Although voice recognition and speech recognition are used interchangeably by those in the field, for the purpose of this application speaker dependent speech recognition is referred to herein as voice recognition. This type of technology creates a "voice print" by requesting the user to speak a particular word multiple times, usually twice. That speaker dependent word, now stored in memory, is user specific, not word specific. That difference distinguishes what is herein called voice recognition from speech recognition. Voice recognition technology enables identification of a specific word (or words) spoken by a specific user. In contrast, speech recognition enables identification of a specific word (or words) of a particular language and dialect, such as American English, spoken by anyone, and is thus speaker independent, depending on how that speech recognition technology is configured.
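  • A minimal sketch of the distinction, with a deliberately trivial stand-in for the acoustic analysis, might look as follows; the enrollment-by-two-repetitions step mirrors the voice-print procedure described above, while everything else is an assumption for illustration.

```python
def extract_features(utterance):
    """Deliberately trivial stand-in for the recognizer's acoustic analysis."""
    return list(utterance)

def enroll_voice_print(first_try, second_try):
    """Speaker-DEPENDENT (voice recognition): average two repetitions of the
    same word by the same speaker into a stored template."""
    f1, f2 = extract_features(first_try), extract_features(second_try)
    return [(a + b) / 2 for a, b in zip(f1, f2)]

def matches_voice_print(utterance, template, threshold=1.0):
    """Accept a later utterance only if it lies close to the stored template."""
    feats = extract_features(utterance)
    distance = sum((a - b) ** 2 for a, b in zip(feats, template)) ** 0.5
    return distance < threshold

# Speaker-INDEPENDENT (speech recognition) would instead compare the features
# against pre-built word nets ("yes", "no", ...) regardless of who is speaking.
```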
  • The recognition algorithm of the program detects whether the voice is one that was programmed into the doll during the setup procedure, that is, for which the program stored a voice print, and, when the recognition is confirmed, allows the controller to recognize the word or words that were used to make the voice print when spoken in that voice. Speech recognition is quite common nowadays, and is widely used in equipment. As example, speech recognition is used by the telephone company computers to receive (and respond to) customer queries for service and the like over the telephone. Speech recognition is also found in PC programs for the retail market like ScanSoft's Dragon Naturally Speaking® program and ViaVoice® program of the IBM company.
  • One preferred form of controller 20 for a practical embodiment of the invention is the controller chip that is available from Sensory, Inc. as model RSC 4128, a microcontroller chip with, among other conventional elements, 128 kilobits of onboard read-only memory ("ROM") to store data and an external memory interface, enabling connection to additional memory 22 that may be desired to store a greater number of words, phrases, sentences, facial expression tables and programming, speech recognition nets, speech recognition programming of the RSC 4128, and the programming of the logic and resultant behavior of the doll. If desired, a separate chip of flash memory can be added to separately record speaker-dependent voice recognition.
  • FIG. 7A pictorially illustrates a number of the food accessories that the doll may be programmed to recognize when the accessory is inserted in the mouth of the doll, including a cookie 51, milk bottle 53, carrot 54, pizza 50, pancakes 58, and juice 59. As shown in FIG. 7B, a baby's feeding bowl 55 contains three separate areas for holding three different spoons with each area holding a small portion of the simulated food in the associated spoon. In other words, one spoon contains spaghetti 56, another contains macaroni 57 and the third contains cereal 52. The simulated food that is held in a respective spoon may also be recognized when inserted in the mouth of the doll. Additionally, as shown in FIG. 7C a toothbrush 60 may be recognized when inserted into the mouth of the doll after the doll requests the player to brush the teeth of the doll following the partaking of food or before bedtime. The foregoing accessories are suitably fabricated of plastic that is decorated with the appropriate visual appearance of the respective food product that's being simulated and/or implement that the child recognizes (or learns what the accessory is intended to represent).
  • Referring again to FIG. 7A and cookie accessory 51, an RFID tag 51′ is embedded in the plastic of the simulated cookie accessory and contains the information that identifies the accessory. Like tags are also illustrated in dash lines in the other accessories shown in the figure (and those in FIGS. 7B and 7C as well), but are not labeled. The RFID tag in a respective accessory is uniquely configured to enable the RFID reader inside the doll to wirelessly identify the unique configuration of information in the tag and communicate that information to the doll controller 20. By means of the controller programming, the controller is able to associate each unique configuration with a particular one of the accessories for the doll, and thus let the doll "know" the identity of the accessory inside the mouth of the doll, what the doll is wearing, or the object on which the doll is seated, and so on. In the preferred embodiment, the foregoing identification is deferred until the program run by controller 20 of the doll reaches a state at which the program requires the reader to determine if the sensor is detecting an RFID tag. Until that stage in the program is reached, the doll does not make any kind of verbal or behavioral indication of the identification.
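  • One way to picture the mapping and the deferred announcement is the sketch below; the tag codes and the poll_mouth_reader interface are hypothetical and are not taken from the doll's actual programming.

```python
# Hypothetical tag codes; the real unique configurations are not disclosed here.
TAG_TO_ACCESSORY = {
    0x0051: "cookie",
    0x0053: "milk bottle",
    0x0054: "carrot",
    0x0050: "pizza",
    0x0060: "toothbrush",
}

def poll_mouth_reader(program_wants_mouth_id, raw_tag):
    """Only when the running program reaches a state that asks for a mouth
    identification is the tag turned into a named accessory; until then the
    doll gives no verbal or behavioral indication of what it has read."""
    if not program_wants_mouth_id or raw_tag is None:
        return None
    return TAG_TO_ACCESSORY.get(raw_tag, "unknown accessory")
```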
  • In practice RFID reader 32 is approximately one inch square in size and includes a small antenna, not illustrated, that is mounted parallel to and just above the vinyl skin in the top (roof) of the mouth of the doll. In order for the reader to read the RFID tag, the tag should be located directly below (e.g. adjacent) and parallel to the antenna in the RFID reader. Reader 32 senses the food accessory (e.g. senses the RFID tag in that accessory) when the accessory is inserted into the mouth of the doll. Because the very small reader 32 is located in the mouth of the doll, it is possible to move the lips of the doll (varying the distance between the RFID tag and the tag reader) to simulate chewing when simulated food, such as the items illustrated in FIG. 7A, is inserted in the mouth of the doll. The RFID tag always remains in RF range, even when the mouth is opened wide or is moving. Where a particular food article is especially thick, too thick for insertion into the mouth of the doll, a portion of that article is formed with a reduced thickness, decorated to simulate a piece of the food article that was previously eaten. That reduced thickness portion fits inside the mouth of the doll and contains the RFID tag. Cookie 51 and pancake 58 in FIG. 7A are shown with a simulated partially eaten portion.
  • Importantly, the mechanical mouth movement that emulates chewing (and sucking or sipping) does not interfere with the RFID reader's ability to read the identification of the food accessory, a novel feature. With the foregoing RFID tag and reader structure, the identification of the simulated food that's inserted in the mouth of the doll is transparent to the young child. That transparency reinforces the illusion that the doll is actually a live person or is magical.
  • The small size of the RFID reader 35 in the doll's back, more specifically, located at the back beneath the neck of the doll, enables the RFID tag on the clothing to be read transparently once the clothing is placed on the doll properly and the Velcro attachment on the clothing is closed to hold the RFID tag directly over the RFID reader.
  • Additionally, the RFID reader 34 in the doll's butt enables the controller to know (e.g. detect) when the doll is actually seated on an accessory that contains an RFID tag, and to identify that accessory. This is particularly important with the potty accessory 40 of FIG. 4. Once the doll is placed on the potty, the illusion that needs to be conveyed to the child, consistent with the illusion that the doll is alive, is that the doll knows whether or not she is sitting on the potty and that "she did it!" if a pee-pee or poo-poo sound occurred while the doll was so seated. The face of the doll in that event will concurrently present a satisfied demeanor. If the doll had a simulated bowel movement or urination before reaching the potty, then the doll had an "accident" and should be a little upset at her mistake. The demeanor of the doll face concurrently would be changed by the facial control actuators to represent that the doll is upset.
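  • The potty behavior reduces to a simple branch on what the butt reader reported when the elimination sound played, as in this sketch; the helper names and phrases are illustrative only.

```python
def potty_event(seated_tag):
    """Branch on what butt reader 34 reported when the elimination sound played."""
    if seated_tag == "POTTY_SEAT":
        say("I did it!")                   # the doll praises herself
        set_expression("satisfied")        # facial mechanism shows a happy face
    else:
        say("Uh oh, I had an accident.")
        set_expression("upset")            # facial mechanism shows mild distress

def say(phrase):
    print("doll says:", phrase)            # stand-in for playing a voice-ROM clip

def set_expression(name):
    print("face shows:", name)             # stand-in for driving the face actuators
```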
  • As those skilled in the art appreciate, the incorporation of an RFID tag reader in the butt (e.g. behind) of a doll (or the backside thereof), so that the accessory on which the doll is seated, if any, may be identified, is novel. Further, the use of the foregoing identification of an accessory seat 40 in order to initiate conversational interactivity with the child is also novel. Applicant refers to this feature as "Virtual Conversational Interactivity™."
  • The facial expressions of the doll head are varied and controllable. Those expressions include those that accompany natural speech, referred to as natural speech expressions, as well as a smile, chewing expressions, listening, sleepiness, yawning, surprise, unhappiness, crying and excitement. Actuators, such as electromechanical or electromagnetic actuators, carried in the doll head and controlled by doll controller 20, are coupled to locations in the head of the doll that control: the movement of the flexible lips of the doll's mouth; the vertical spread of the jaw, hence, the lips and mouth of the doll; the smiling (upturn) or unhappy (downturn) positioning of the right and left ends of the lips of the doll; the upward movement of the cheeks of the doll while the mouth of the doll is concurrently positioned in a smile; the upward movement of the inner portion of both eyebrows of the doll as the doll eyes open in an expression of wonderment or surprise; the projection of the lower lip of the doll mouth downward and forward in a pout; the slow opening of the mouth combined with sleepily closing eyelids in the expression of a yawn; the fluttering eyelids on wakening of the doll; the eyebrows; and the eyeballs of both eyes of the doll. The actuator that opens and closes the doll mouth can be called a mouth actuator; and, collectively, the actuators for the face of the doll may be said to constitute a facial actuator. The foregoing actuator arrangement is pictorially illustrated in FIG. 5A, to which reference is made.
  • Head 14 contains actuators 61, 63 and 65 for controlling the shape and position of the lips 32 of the doll. The actuators are coupled to a respective output 28 of controller 20, not illustrated in this figure, and are either energized or deenergized by the controller in accordance with the application program that is being "run." Actuators 61 and 63, when energized, pull the ends of the lips, which effectively changes the appearance of the mouth from a normal one to a wide one. When de-energized, the skin material, which is elastic, restores to the normal size, and the mouth returns to a normal appearance. Actuator 65, when energized by the controller, spreads the upper and lower lips apart (e.g. raises the upper lip and lowers the lower jaw, hence, the lower lip), effectively opening the mouth of the doll. When deenergized, the lips (and jaws) are restored to a closed appearance, which is the normal appearance of the mouth. Thus, actuator 65 essentially serves as a mouth control actuator. Actuators 67 and 69 are coupled to respective outputs 28 of the controller and respectively control the left and right cheeks of the doll, which are formed in the elastic skin of the doll face. When energized by the controller, the actuators move the cheeks upward. When de-energized, the cheeks restore to the normal position. When the doll is to smile, actuators 61 and 63 spread the lips to a wide position and actuators 67 and 69 move the cheeks upward. Those movements produce a smile.
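  • For the discrete-actuator arrangement of FIG. 5A, the controller's job amounts to energizing a chosen set of actuators per expression, along the lines of the following sketch; the expression table and the function signatures are assumptions made for illustration.

```python
# Actuator numbers follow the description above; the grouping per expression
# is an illustrative assumption.
EXPRESSIONS = {
    "smile":    {61, 63, 67, 69},          # widen the lips, lift both cheeks
    "open":     {65},                      # spread the lips / drop the jaw
    "surprise": {65, 71, 73, 75, 77},      # open mouth and eyes, raise eyebrows
}

def show(expression, energize, deenergize_all):
    """Energize the actuators named for the expression; de-energizing lets the
    elastic skin restore the neutral face."""
    deenergize_all()
    for actuator_id in EXPRESSIONS.get(expression, set()):
        energize(actuator_id)              # a controller output 28 drives each one
```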
  • The actuators may be bi-directional in which case the controller program directs the movement and direction of the respective actuator. Alternatively, the actuators or some of the actuators may be a unidirectional type with a spring that moves in one direction and tensions the spring (or the elasticity of the doll skin) when energized and when deenergized the actuator is returned to the initial position by the energy stored in the spring.
  • Actuators 71 and 73 are coupled to respective outputs 28 of the controller and respectively control the right and left eyelids of the doll to thereby open (or close) the eyes of the doll. In the embodiment, the eyelid is attached to the eye of the doll, a spherical member, and covers a circumferential portion of that sphere. A shaft mounts that sphere in the eye socket in the doll face for rotational (e.g. pivotal) movement about an axis defined by the shaft. The latter actuators produce a rotational movement when energized. By pivoting the eye, the attached eyelid is pivoted counterclockwise, away from the front of the face to the rear, and the eyeball is concurrently pivoted to the normal position in which the eye appears to be looking straight ahead. When energized, actuators 71 and 73 essentially cause the doll to open the doll's eyes. Actuators 75 and 77 are also coupled to respective outputs of the controller and control the right and left eyebrows of the doll, respectively. When energized by the controller, actuators 75 and 77 push (or pull) on the elastic skin of the doll face to slightly stretch the skin. Since the skin carries the image of the eyebrows, stretching the skin effectively raises the eyebrows. When de-energized, the skin contracts and resiliently restores to the normal condition, moving the eyebrows back to the normal position. When expressing surprise, the doll's mouth would be opened, the eyes would be wide open and the eyebrows would be raised, a familiar appearance of a surprised person.
  • When a simulated food article is inserted in the mouth of the doll, controller 20 indirectly detects the food article. The controller is programmed to have the doll, among other actions, taste and chew the food. For the foregoing action the controller is further programmed with an expression control module or subsidiary program, and one operational routine in that control module or program is the act of chewing. With the program giving the command for the doll to chew, the controller issues voice messages that broadcast chewing sounds from the loudspeaker 39, concurrently with issuing commands to the mouth actuator 65 and lip actuators 61 and 63 to move the jaws or lips and simulate the facial expressions that normally accompany chewing by a live person. Preferably, as the two lips come together and/or the lower jaw moves up and down repeatedly in a chewing cycle, if the food is potato chips or apples, the doll concurrently broadcasts a "munch" sound through the loudspeaker.
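  • The chewing routine described above could be sketched as follows, with the crunchy-food list, cycle count and helper functions assumed for the example rather than taken from the expression control module itself.

```python
import time

CRUNCHY_FOODS = {"cookie", "carrot", "potato chips", "apple"}

def chew(food, cycles=4):
    """Open and close the mouth while broadcasting a food-appropriate sound."""
    sound = "munch" if food in CRUNCHY_FOODS else "chew"
    for _ in range(cycles):
        drive_mouth(open_amount=0.8)       # mouth actuator 65 drops the jaw
        play_sound(sound)                  # broadcast through loudspeaker 39
        time.sleep(0.3)
        drive_mouth(open_amount=0.1)       # close again to complete the cycle
        time.sleep(0.3)

def drive_mouth(open_amount):
    print(f"jaw opening: {open_amount:.1f}")

def play_sound(name):
    print("sound:", name)
```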
  • Reference is made to FIGS. 8A through 8F. The actuator arrangement that is coupled to the skin of the doll face should be capable of contorting the mouth, eyelids, cheeks and eyebrows to produce various expressions. Some of those expressions are illustrated, such as expressions normally associated with yawning (FIG. 8A), sleepiness, attentive listening, a feigned smile (FIG. 8B), surprise (FIG. 8C), unhappiness, crying (FIG. 8D), excited speech (FIG. 8E), chewing, and waking with fluttering eyelids (FIG. 8F). The eyelids (FIG. 8F) of the doll can also be fluttered by the actuators, an action that may occur when a child is awakening. The number of possible physical manipulations of the doll face appears almost infinite, limited only by the imagination of the doll designer. Typically, the foregoing expressions are accompanied by verbal statements or sounds uttered by (i.e. audibly broadcast by) the doll, enhancing the impact of the expression on the player and reinforcing the meaning of the expression where there might be doubt. Those verbal statements are elsewhere herein discussed.
  • A control system that uses such individual electromagnetic actuators is more expensive at present than desired. Such a control system would necessitate a price for the resultant doll that at the present time is so high at both wholesale and retail that the doll would not sell to buyers or would have very limited distribution and low volume sales. Therefore, a specific mechanism was developed for the doll by others to whom the requirements for construction of a practical embodiment of the invention were made known; and that mechanism proved less complicated and less expensive than individual actuators for a practical embodiment, though slightly less versatile. That device used two motors to move gears and/or a number of cams that could accomplish the foregoing facial movements within acceptable limits. The face of the doll is formed of a flexible, but strong, thermoplastic elastomeric material, a rubber-like plastic material that presents a human-like skin in feel and appearance. That material forms a skin that fits over and overlays a stiff plastic skull and contains suitable openings in the skin to accommodate the mouth and eyes carried in the skull, and the skull houses the controller-controlled actuator mechanism. The actuator mechanism should be able to move or distort that skin so as to produce facial expressions on the doll that appear realistic. This thermoplastic elastomeric skin is injection molded to shape and, when formed, stretches over and onto the rigid plastic skull. Preferably, small rigid plastic connector tabs are insert molded into the interior of the foregoing skull skin. Those connector tabs connect certain parts of the skin to respective actuators, enabling the actuators, when operated, to more positively push or pull the skin to change the appearance of the doll face.
  • In one practical embodiment, the mechanism is achieved with two motors and a double cam mechanism, later herein more fully illustrated and described. That design includes two double sided cams, each operated by a single motor that is able to rotate the cams clockwise or counterclockwise. The cams, in combination with two gearboxes operating them, move levers that attach to respective ones of the rigid insert molded plastic connectors molded into the inside surface of the skin material that covers the skull and forms the doll's face. The strategically configured placement and shape of the cam surfaces, together with the positions in the skin at which the actuator connectors are molded, enable facial movements of the strategically selected portions of the doll's face. Those facial movements achieve far more realistic human movements, in the applicant's view, than any doll ever made or marketed previously.
  • Using two motors, each operating independently of the other but under control of the doll controller, the number of facial expressions that can be obtained is maximized with a minimum number of motors. Each of the two motors is bi-directional and is capable of rotating either clockwise or counterclockwise (forward or reverse); the motor speed can be changed to rotate fast or slow, and the duration of the rotation of the motor shaft occurs as directed by controller 20 (FIG. 3). The direction, speed and duration of rotation for each facial movement is determined by the doll controller, which in turn is controlled by the programming logic and game flow directing the actions.
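  • Each facial movement thus reduces to a small motor command of direction, speed and duration issued by the controller, as in the illustrative sketch below; the command values shown are invented examples, not calibrated settings.

```python
from dataclasses import dataclass

@dataclass
class MotorCommand:
    motor: str          # "eye_eyebrow_cam" (cam 103) or "mouth_cheek_cam" (cam 104)
    direction: int      # +1 clockwise, -1 counterclockwise
    speed: float        # 0.0 .. 1.0 of full speed
    duration_s: float   # run time, as directed by controller 20

# e.g. open the mouth wide for surprise vs. crack it open slightly for a smile
SURPRISE_MOUTH = MotorCommand("mouth_cheek_cam", +1, 1.0, 0.50)
SMILE_MOUTH    = MotorCommand("mouth_cheek_cam", +1, 0.4, 0.15)

def run(cmd):
    # A real build would also check the wiper-style shaft-position feedback
    # (element 118) before releasing the motor at the target cam position.
    print(f"{cmd.motor}: direction {cmd.direction}, speed {cmd.speed}, {cmd.duration_s}s")
```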
  • The motors connect to gears and/or cams and/or levers that rotate the eyeballs of the doll to a closed position as the lower jaw of the doll is moved downwards and then immediately rotate the eyeballs to the open position and the jaw upwards. That action produces "blinking" of the eyes in a natural manner while the doll is "talking." During the course of broadcasting voice statements the controller causes the doll eyes to blink repetitively, simulating the natural eyelid movement of a living person who is in the course of speaking. The motors could also open the eyelids and eyes of the doll to a wide open position and concurrently move the eyebrows of the doll upward to simulate a facial expression of surprise; could lower the eyelids of the doll slowly while concurrently widening the mouth of the doll to simulate an expression of yawning; or could close the eyelids and simulate the appearance of sleeping. The motors could also lift the cheeks of the face of the doll upwards and backwards and the corners of the mouth of the doll upwards and back to produce a smile on the doll face in a very human-like manner, or turn the corners of the mouth of the doll down and/or protrude the lower lip so that the doll face gives the expression of unhappiness or even appears to pout.
  • A more detailed picture of the mechanism described in the prior paragraph is presented in FIGS. 5B through 5D to which reference is made. FIG. 5B illustrates the mechanism in an exploded view of the doll head from which the covering custom elastomeric skin is omitted. The front and rear head plates of rigid plastic, 100 and 101, join together to define an internal region in which the remaining parts illustrated in the figure are housed. The front head plate contains the various openings or windows for the eyes, cheeks, nose, and mouth, as illustrated. The two cams 103 and 104 are mounted side-by-side in the head for rotation about a common axis. Each cam is formed of a molded stiff plastic that is circular and generally flat in shape, disk-like, with the front and back faces of that disk containing at least one, and possibly two cam tracks. Each cam track is formed between a parallel pair of raised ridges located on the face of the disk, that simulates and is equivalent to a grooved track. Those cam tracks are strategically configured irregular ovoid/elliptic shaped tracks about the center of the disk. The cam track controls the somewhat lateral position of a cam follower, formed of a short round peg or lever, as a function of the rotational position of the cam. One or more such cam-track is included on each of the two faces of each cam, only one of which faces on each cam is illustrated in the figure.
  • Cam 103 controls the movement and change of position of the eyes of the doll by means of the cam-track on one face of the cam and controls the eyebrows of the doll by means of the cam-track on the opposite face of that cam. Therefore cam 103 is sometimes referred to herein as the eye and eyebrow cam. Cam 104 controls the movement and change of the position of the mouth of the doll by means of the cam-track on one face of the cam and controls the cheeks of the doll by means of the cam-track on the opposite face of the cam. Therefore cam 104 is sometimes referred to herein as the mouth and cheek cam.
  • The doll head contains a pair of eyes 105, an eyebrow actuator connector 106, and an eye lever 107. The mouth of the doll is represented by structure 108. The head further contains additional eye levers 119 and 120. Eye lever 119 is connected to and follows a cam-track in cam 103. A pair of cheek actuators 109 connects to a cheek actuator connector 110. In turn a cheek lever 111 connects to cheek connector 110.
  • A battery operated DC motor 112 drives cam 104, indirectly, through the appropriate set of three gears 113, with power being supplied to the motor under control of the controller and, hence, the program being run by the controller. The motor attaches to a motor casing 114, enclosing the gears 113 and that casing includes a rotating joint 115 driven by the lower gear in the figure that rotates the shaft that drives cam 104. The doll head also includes a motor housing 116 that houses a second DC motor, not illustrated, that drives cam 103, indirectly, through an appropriate set of gears, also not illustrated. Motor housing 116 includes a rotatable shaft 117 that is exposed on the far side. A wiper 118 is placed in abutment with shaft 117 and defines a shaft position indicator that senses position where the metal contacts of the wiper contact metal contacts that rotate with the shaft. Through appropriate wiring, not illustrated, wiper 118 reports the shaft position to the controller. That information or feedback is useful to enable the controller to properly turn the cams so that the mouth of the doll is opened wide in the case of expressing surprise or is opening slightly when the doll is to express a smile.
  • The mechanism includes eye levers 119 and 120; second and third cheek levers 121 and 122, in addition to cheek lever 111, earlier noted; and mouth levers 123 and 124. It also includes mouth extenders 125 and 126, mouth levers 127, 128 and 129, and an additional mouth extender 130. A lever plate 131 is included to support the aforementioned levers. The strangely bent strip shown to the left of cam disk 104 constitutes the main crank 132. Casing 133 closes the rear end of the mechanism.
  • FIG. 5C shows many of the same elements as viewed from the same side as the exploded view of FIG. 5B, but assembled together, and FIG. 5D shows the elements as viewed from the opposite side of the skeletal head. As shown in FIG. 5C, cheek lever 122 and mouth lever 123 are coupled to a cam-track in cam 103. Suitably, those cam tracks are formed by parallel ridges formed on the sides of the disk-like cam that extend a small distance from the otherwise flat side of the disk. Forward motion of cheek lever 122 pushes the cheek actuators 109 (in FIG. 5B), and those actuators, coupled to the connector tabs attached to the inside surface of the elastic doll head skin in the region of the cheeks, in turn press on the covering elastic skin to raise or lower the cheek area of the doll face. The mouth lever 123 likewise follows a cam-track in the cam disk and, through the additional mouth levers 124, mouth extenders 125, 126 and 130, and mouth levers 127, 128 and 129, operates the mouth of the doll, not all of which are labeled in FIGS. 5C and 5D.
  • The foregoing mechanism accomplishes essentially the same functions as one that relies on the electromagnetic actuators described in connection with FIG. 5A, but is believed less expensive to fabricate. As one appreciates, it is not the purpose of this application to delve into the mechanical intricacies of the mechanism component of third party sources, however ingenious the mechanical engineering may be, other than to note the availability of such mechanisms and the reasons why applicant prefers that mechanism for the embodiment of the doll. That double motor, double cam mechanism is found to possess the capability of accomplishing the proper control of the "facial muscles" of the doll face to produce the contortions that are appropriate to each of the emotions above noted, such as those illustrated in FIGS. 8A through 8F, including, but not limited to, flutter, yawn, sleepy, listen, smile, chew and speech, surprise, unhappiness, crying, and excited speech.
  • The strategic configuration of the irregular elliptic cam-track on the eye movement face of cam 103 and on cam-track for the eyebrow movement on the opposite face of cam 103 create three appropriate positions for both the eye and eyebrows as cam 103 rotates forward or backwards, seamlessly moving either clockwise (or counterclockwise) for various distances and at various speeds to change the eyes and eyebrows from one position to another. In a like manner the strategic configuration of the irregular elliptic cam-track on the cheek movement face of cam 104, and on the mouth movement opposite face of cam 104 create appropriate positions for both the cheeks and the mouth as cam 104 rotates forward (or backwards) seamlessly moving either clockwise (or counterclockwise) for various distances and at various speeds to change the cheeks and mouth from one position to another. The movement of the eyes may occur concurrently with movement of the eyebrows or independently of eyebrow movement. The mechanism is mounted in the head of the doll and the electrical inputs to that mechanism are connected to the appropriate outputs of the controller 20.
  • In applicant's view, the doll is able to produce emotional expressions that surpass anything previously attempted. The apparent ability of the actuators to achieve facial positions simulating various emotions that enable the doll to respond emotionally by means of facial expression alone or accompanied by speech in response to Conversational Interaction™ by means of voice and speech recognition when the player, whether mother or child, speaks to the doll, appears revolutionary.
  • Returning to FIG. 3, controller 20 is preferably implemented in the form of a battery operated programmable microprocessor or microcontroller, as variously termed, and associated memory, including voice ROM, a digital-to-analog converter and appropriate input and output interface circuits. The microcontroller may also include an analog-to-digital converter and digital filters. The foregoing may be implemented in a custom semiconductor integrated circuit chip, although separate chips may be used as an alternative, all of which are known and have appeared heretofore in interactive toys. Optionally, the digital clock need not be a separate unit as earlier described, but instead is also integrally formed on the chip. The chip's inputs are connected to the respective sensors (and digital clock) described, and its outputs to the loudspeaker. The micro-controller is programmed in accordance with the foregoing description, and that program, the software, is stored in another portion of non-volatile memory or ROM.
  • As described, in the preferred embodiment the doll operates entirely on power supplied by batteries; that is, it is a self-contained battery operated unit. When the electrical battery (or batteries) 30 are initially inserted in the battery compartment inside the doll and the power switch is set to "on," the right hand of the doll is squeezed, actuating switch 38, and the programmed set-up procedure for the doll commences. A few moments thereafter, the doll voices a yawn, contorts the doll face accordingly, and introduces herself as Amanda. The doll then asks the player to say the word "pizza." When spoken, the word is recognized by the doll. The doll confirms that it heard the player state "pizza" and then asks the player to say the word "spaghetti." When spoken by the player, that word is also recognized by the doll and the doll confirms aloud that it heard the word "spaghetti." In this embodiment, with voice recognition software analyzing the spoken words "pizza" and "spaghetti" and storing the analysis in memory as a code or pattern, the doll attaches the personage of "mom" to that analysis. In that way the child player becomes a de facto "mom."
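  • Condensed into sketch form, and with every helper name invented for illustration, that first-power-on flow might read as follows.

```python
def first_time_setup():
    wait_for_right_hand_squeeze()          # switch 38 starts the procedure
    say("(yawns) Hi, I'm Amanda!")
    voice_prints = {}
    for word in ("pizza", "spaghetti"):
        say(f"Can you say {word}?")
        sample = record_until_silence()
        voice_prints[word] = sample        # the stored analysis becomes mom's print
        say(f"I heard you say {word}!")
    return {"mommy": voice_prints}         # the child player is now the de facto mom

# Minimal stubs so the sketch runs outside the doll:
def wait_for_right_hand_squeeze():
    pass

def say(text):
    print("doll:", text)

def record_until_silence():
    return [0.0]                           # placeholder audio sample
```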
  • When powered on for the first time, or when the mother receives a special command from the doll, the doll interrogates the mother in the most natural way possible in order to determine (and set) the current date and time on the clock of the controller; whether the doll should observe daylight savings time, the wake-up time for the doll, and bedtime. Each time an answer is given by the child, that answer is verbally confirmed by the doll before proceeding. Additional topics could be programmed into the controller program for interrogation, provided that the topic is relevant to a function of the doll.
  • The doll first speaks a particular year, such as 2005, and then asks the player if the year that was spoken was correct. If the player answers in the negative, the doll tells the player to squeeze the right hand of the doll and release that hand from the squeeze only when the correct year is spoken. The doll then broadcasts the various years in serial order. When the year 2005 is spoken, the player releases the right hand and the set-up procedure continues. If the player makes a mistake, the player is able to note the mistake when the doll tells the player the year and asks for verbal confirmation from the player, a "yes" or a "no" being spoken. If negative, the doll repeats the entire procedure until a correct year is confirmed by the player.
  • Once the year date is settled, the doll recites the name of a month and asks if that is the correct current month, requesting an answer of either “yes” or “no.” If the player states “yes,” the answer is recognized and the set up procedure next addresses the date in the month; if “no” the doll recognizes that and commences to repeat the same procedure described above for an incorrect year, but for an incorrect month instead, which includes squeezing the right hand of the doll. The doll then speaks the date number for the day of the month and requests an answer from the player. The protocol for settling upon a correct date is the same as described for the year date and the month. Once the correct date is settled, the doll makes a statement of the correct date: It is now Nov. 4, 2005, as example.
  • The program then moves to stating a wake-up time and a sleep time, which are negotiated with the player and settled upon. Alternatively, the program moves to stating a wake-up time and a sleep time and gives the player the option to choose those times or skip doing so. If the player wishes to select a wake-up and/or sleep time, then those times of day are selected in the same manner previously described herein for selection of the year, month and day.
  • The foregoing information is programmed into the programmable EEPROM (electrically erasable programmable read-only) memory during the set-up procedure and is retained even when the doll is powered off. The clock function continues running so long as the power switch to the DC power supply remains in the power-on position. When that main power switch is turned off, the clock ceases to run, but the calendar information, that is, the time, day, month and year as of the time the power was turned off, is saved in memory. When the main power switch 31 is again turned to the "on" position, the doll program causes the microcontroller to run a brief set-up procedure that enables the user to reset the time from the prior time that was saved. That information may be edited or changed, however, by actuating the reset key 27, shown in FIG. 3.
  • In an alternative embodiment, the doll may verbally query the child to speak the name of the doll so that the doll may obtain the required voice sample of the child to identify the child as the mommy. Many variations can be made in the set-up procedure. In one alternative, the doll broadcasts a request through the speaker 39, "Say my name, mommy," and may state that request twice. The doll will know when her make-believe "mommy" states the name of the doll, which in the preferred embodiment is Amanda. The voice recognition software within the controller 20 (e.g. voice recognition process 25) analyzes the response and produces an electronic voice pattern of the "mommy." The doll may ask again and, with the additional reply at this stage, the doll recognizes the person doing the speaking as "mommy," and the doll is programmed to reply with a statement, such as "I love you mommy." Thereafter, the doll continues with the set-up procedure as specified in the set-up program for the controller.
  • Verbal messages are broadcast from the loudspeaker 39 under control of the microcontroller by outputting the contents of various locations in the voice ROM, and applying that digital information to a digital-to-analog converter or equivalent virtual converter, not illustrated, forming a speech synthesizer. From there the sound information propagates to loudspeaker 39 and is broadcast. The digital form of the message is converted to the analog form that drives the loudspeaker and produces the desired verbalization of audible sounds, words and other voice messages.
  • The verbal messages and sounds are preferably human voices that are recorded as digital information in a portion of the ROM memory, which portion may be referred to as the voice ROM, using any standard technique. Those verbal messages, such as those earlier described, may be stored as complete sentences or, alternatively, as words and partial phrases, depending in part on the amount of memory available or on which approach one prefers. The inventor's script also contains commands for the dual motor, dual cam facial expression maker for achieving the facial expressions on the face of the doll that the inventor wishes to accompany those verbal messages, including the verbal statements presented in this text.
  • An exemplary listing of additional verbal statements and the positions of the four controlled facial aspects that are to accompany the respective statements, namely the eyebrows, the eyes, the mouth and the cheeks, is presented in APPENDIX A to this specification. Appendix A contains statements that are spoken by the doll and the accompanying changes to the components of the doll face that change the facial expression of the doll. The latter are arranged in two columns. The right hand column recites the position of the eyebrows, eyes, mouth and cheeks prior to (and following) the change, and the left hand column describes the change to those components from the foregoing default condition. As an example, when the doll broadcasts "Yea, Amanda loves dinner!" the eyebrows move from the normal position upward, the eyes remain in the normal configuration, that is, open, the mouth changes from the normal position to wide, and the cheeks of the doll move up. Each of those actions, voice and physical action, is scripted by the designer and combined to produce a life-like and natural response that one might expect from a living child.
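  • An Appendix-A-style row could be stored and applied roughly as sketched below; only the "Yea, Amanda loves dinner!" entry is drawn from the example above, and the rest of the structure is an assumption made for illustration.

```python
DEFAULT_FACE = {"eyebrows": "normal", "eyes": "open",
                "mouth": "normal", "cheeks": "normal"}

# Only the first entry mirrors the example in the text; other rows would come
# from the designer's script in Appendix A.
SCRIPT = {
    "Yea, Amanda loves dinner!": {"eyebrows": "up", "mouth": "wide", "cheeks": "up"},
}

def speak_with_expression(statement, say, set_face):
    face = dict(DEFAULT_FACE)
    face.update(SCRIPT.get(statement, {}))  # only the listed features change
    set_face(face)                          # drive the cam motors first
    say(statement)                          # then broadcast the phrase
    set_face(DEFAULT_FACE)                  # restore the default afterwards
```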
  • In addition to short phrases, the verbal messages may include songs that are sung by the doll, music and/or special sound effects. In the preferred embodiment the doll is capable of singing several songs. It is also capable of producing the “pee-pee” and “poo-poo” sound effects of a child eliminating on the toilet, sounds which amuse and excite young children.
  • Memory chips have become relatively inexpensive, which minimizes any necessity for reducing the size of memory included in the doll. Hence, the preferred approach is to store complete sentences of spoken words. That allows for a higher quality of sound reproduction. To minimize the amount of memory required, if desired, messages may be stored as individual words, partial phrases, and/or full expressions, and then pasted together for broadcast, e.g. concatenated. As an example, the verbal message "I want a banana" may be parsed into separate parts and stored in different areas of the memory as "I want a" and as "banana." Under program control, when the message for a banana is called for during the course of the program, the microcontroller selects and consecutively outputs the two sections from the memory in the proper order. Other verbal requests, for a pizza or a Popsicle as examples, may likewise be constructed using the same initial phrase "I want a," thereby requiring storage space for that phrase only once. The individual words and sub-phrases may be used repeatedly, allowing them to be played back in various sequences. The quality of audio reproduction using those space saving techniques is less than that available by storing complete phrases; hence the space saving technique described is less preferred than storing and reproducing complete phrases.
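  • The concatenation approach can be sketched as follows, with a plain dictionary standing in for addressed regions of the voice ROM and the clip contents reduced to placeholders.

```python
# A plain dictionary stands in for addressed regions of the voice ROM.
VOICE_ROM = {
    "I want a": b"<clip: I want a>",
    "banana":   b"<clip: banana>",
    "pizza":    b"<clip: pizza>",
    "Popsicle": b"<clip: Popsicle>",
}

def build_message(*segments):
    """Fetch each stored segment in order and splice them for playback."""
    return b" ".join(VOICE_ROM[s] for s in segments)

request_banana = build_message("I want a", "banana")
request_pizza  = build_message("I want a", "pizza")   # reuses the same lead-in clip
```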
  • The digitized audio may be compressed using any conventional compression algorithm during the recording process to further minimize memory requirements; and the program should then include implementation of an algorithm for decompressing that compressed digitized audio as it is played back.
  • The doll may be modified to incorporate a separate clock calendar, such as a digital clock calendar chip, in lieu of the clock of the prior embodiment. In a similar manner to the way in which the controller 20 is programmed with information enabling it to keep track of the daily passing of time (i.e., a clock function), the device may, in addition to the heretofore mentioned clock function, be programmed to keep track of the weekly, monthly, and yearly passing of time (i.e., a calendar function). The calendar would be set in a similar manner to the clock, where, as earlier described, in the "set the time" mode the clock would be set to the hour, the minutes, and whether the time is AM or PM. Once that information was set in the clock, such as by toggling the right hand sensor in the prior embodiment, the "set the date" mode would automatically occur on the program menu, permitting the parent to also set the month by inputting a number between 1 and 12, then set the day of the month by inputting a number between 1 and 31, and then set the year by inputting the appropriate four numbers for the current year, such as 2005.
  • The set-up procedure is accomplished verbally and manually, which avoids the necessity for inclusion of a visual display, such as an LCD. The doll instructs the child to squeeze the right hand of the doll and to release that squeeze when the child hears the doll broadcast the correct number, starting with the year. The doll then starts by speaking 2005, 2006, 2007 and so on, until the child releases the squeeze of the hand of the doll, thereby selecting the year. The doll then makes the same request for the correct month, and speaks the months, one, two, and so on (recycling if necessary each time the figure twelve is passed) until the child releases the squeeze of the right hand of the doll; it then repeats the procedure for the day of the month. Thereafter the procedure continues with the hour and minutes of the day, again by squeezing and releasing the squeeze; and after that is set, the doll requests a verbal answer, yes or no, as to whether the time is a morning time (AM) or an evening time (PM). The controller also holds in memory the dates on which daylight saving time commences in each of the U.S., the U.K., Australia and New Zealand, all of which are English speaking countries.
  • As a specific example, the doll broadcasts an instruction to the player to squeeze the right hand of the doll, wherein pressure is applied to the pressure operated switch, and to release that squeeze when the correct year is broadcast from the speaker; the recital of the years is terminated when the squeeze is released, and the year last recited is placed into memory. The doll then broadcasts the year last recited and requests the player to confirm whether the year so broadcast is correct by answering yes or no. If the answer is no, the immediately preceding three steps are repeated until the foregoing answer is YES.
  • Alternatively, if one squeezes the right hand of the doll and immediately releases the squeeze, the doll broadcasts the first year, 2005, and with each subsequent squeeze the doll broadcasts the next successive year. That continues until one stops squeezing the doll hand for a predetermined time, whereupon the doll repeats the last year broadcast and queries the person whether the year spoken was correct. The foregoing approach is in a sense forgiving. One can set the year using either of the two described protocols or any combination thereof.
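  • The squeeze-and-release selection of the year might be sketched as follows (Python; `hand_squeezed`, `speak` and `hear_yes` stand in for the hand sensor, the loudspeaker and the speech recognizer, and are assumptions made for this sketch of the first protocol described above):

```python
def select_year(hand_squeezed, speak, hear_yes, first_year=2005):
    """Recite successive years while the hand is squeezed; the year being
    spoken when the squeeze is released is taken as the selection and is
    broadcast back for a yes/no confirmation."""
    while True:
        speak("Squeeze my hand, and let go when you hear the right year.")
        while not hand_squeezed():      # wait for the squeeze to begin
            pass
        year = first_year
        while hand_squeezed():          # recite until the squeeze is released
            speak(str(year))
            year += 1
        selected = year - 1             # the year last recited
        speak(f"Is {selected} right?")
        if hear_yes():
            return selected             # confirmed; place in memory
        # answer was "no": repeat the preceding steps

# Example with a simulated squeeze that lasts for three recited years:
presses = iter([True, True, True, True, False] + [False] * 10)
print("year set to", select_year(hand_squeezed=lambda: next(presses),
                                 speak=print, hear_yes=lambda: True))
```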
  • When the answer to the foregoing query regarding the year is yes, an instruction is broadcast to the player to again squeeze the doll hand, wherein pressure is applied to the pressure operated switch inside the hand, and to release the squeeze when the correct month of the year is broadcast. The recital of the months of the year is terminated on release of the squeeze, and the name of the month last recited is placed in memory. The month last recited is then broadcast in confirmation as the month selected, and the player is requested to confirm its correctness by answering yes or no. If the answer is no, the immediately preceding three steps are repeated until the foregoing answer is YES. When the answer to the foregoing query is yes, an instruction is broadcast to the player to squeeze the doll hand and to release that squeeze when the correct day of the month is broadcast, followed by broadcasting the days of the month in serial order through the loudspeaker; the recital of the days of the month is terminated when the squeeze is released, and the day of the month last recited is placed into memory. The day of the month last recited is then broadcast in confirmation as the day selected, and the player is requested to confirm whether the day so broadcast is correct by answering yes or no; if the answer is no, the immediately preceding three steps are repeated until the foregoing answer is YES.
  • If the answer is yes, the month, day, and year stored in memory are broadcast by the doll and the player is requested to confirm whether the information broadcast is correct by answering yes or no; if the answer is no, the preceding steps of setting the year, month and day are repeated from the beginning until the final answer becomes yes. Next the clock time is set. The time set-up begins by broadcasting a statement to the player as to the positions of the small and large hands of a clock and requesting the player to confirm the correctness of that statement by answering yes or no. If the answer is no, an instruction is broadcast to the player to squeeze the doll hand, wherein pressure is applied to the pressure operated switch, and to release the squeeze when the correct hour of the day is broadcast from the speaker, followed by broadcasting the hours of the day in serial order through the loudspeaker. The recital of the hours of the day is terminated when the squeeze is released on attainment of the current hour, removing pressure from the pressure operated switch, and the hour of the day last recited is placed into memory. An instruction is then broadcast to the player to squeeze the hand and to release that squeeze when the correct minutes of the hour are broadcast from the speaker, followed by broadcasting the minutes of the hour in five minute increments in serial order through the loudspeaker. The recital of the minutes of the hour is terminated when the player releases the squeeze on attainment of the current minute increment, removing pressure from the pressure operated switch, and the minute increment last recited is placed into memory. Either as the next step, or when the answer to the foregoing query is yes, information is broadcast to the player on the meaning of AM and PM, and the player is queried whether it is necessary to reset what the clock indicates in that regard by answering yes or no. When the answer to the foregoing query is yes, a query is broadcast to the player to answer whether the time of day is AM by answering yes or no; if the answer is yes, AM is placed in memory, but if no, PM is placed in memory.
  • Then the player is next queried whether the player observes daylight saving time and is requested to answer yes or no. The answer given by the player is recognized and stored in memory. If the player answers yes, the controller is set to change the time at the next occurrence of either of two events: moving the time forward one hour on the particular day of the year on which daylight saving time begins, or moving the time back one hour on the particular day of the year on which daylight saving time ends. As a further stage in the set-up, the bed-time and wake-up times for the doll may be set in a similar manner, either by the hand squeeze approach or by a verbal yes or no answer to a question asking whether the player wishes to set a wake-up time and bed-time for the doll.
  • In addition, readily available information regarding the specific date on which a holiday, such as Easter or Thanksgiving, falls in each year, when such days vary year by year, can be stored in ROM memory and be programmed into the controller. Such information can be stored for however many years into the future as desired, fifteen years in the case of the Amanda doll, as example. Similar information regarding all holidays that fall on the same date of a specific month each year, such as Christmas, can likewise be programmed into the device.
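  • By way of illustration, such a holiday table and its lookup could take the following form (Python sketch; the dates shown are illustrative placeholders rather than a verified table):

```python
import datetime

# Movable holidays: the specific date for each future year is stored in ROM.
MOVABLE_HOLIDAYS = {
    "Easter":       {2005: (3, 27), 2006: (4, 16), 2007: (4, 8)},
    "Thanksgiving": {2005: (11, 24), 2006: (11, 23), 2007: (11, 22)},
}

# Fixed holidays fall on the same month and day every year.
FIXED_HOLIDAYS = {"Christmas": (12, 25), "Valentine's Day": (2, 14)}

def holiday_today(today):
    """Return the name of today's holiday, if any, otherwise None."""
    for name, (month, day) in FIXED_HOLIDAYS.items():
        if (today.month, today.day) == (month, day):
            return name
    for name, dates_by_year in MOVABLE_HOLIDAYS.items():
        if dates_by_year.get(today.year) == (today.month, today.day):
            return name
    return None

print(holiday_today(datetime.date(2005, 12, 25)))   # -> Christmas
print(holiday_today(datetime.date(2005, 11, 24)))   # -> Thanksgiving
```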
  • Play Patterns
  • The preferred embodiment features a variety of play patterns. Those play patterns are defined in the following table and an abbreviated version as used in a practical embodiment appears in APPENDIX B to this application:
    Amazing Amanda Play Patterns Categorized
    Broad Category Sub-Category
    1 Breakfast
    2 Lunch
    3 Dinner
    4 Snacks & Drinks
    5 Potty
    6 Wake Up
    7 Go to Bed Sleep
    Nap Time
    Bed Prep
    Delay Tactics
    8 Games Simon Says
    Color Game
    Shape Game
    Find Something
    Favorites
    Sing a Song
    Others
    9 Hugs & Kisses
    10 Dressing & Grooming Change from Jammies to Daytime Outfit
    Change from Daytime Outfit to Jammies
    Dress Up
    Hair Play
    Jewelry
    11 Go Out Restaurant/Fast Food
    Car Routine
    Plane
    Shopping Mall
    Grocery Store
    (Dressing)
    12 Exercise
    13 Recognition Mom
    Grandpa (Papa)
    Grandma (Nanna)
    Best Friend
    Great Grandma
    Great Grandpa
    14 Daycare
    15 Babysitter
    16 Illness
  • The play patterns can be either child-initiated or doll-initiated.
  • Child-initiated Play. In child-initiated play, the child commands the doll to initiate a particular play pattern. The easiest way to think of the play is to visualize a hierarchical menu system, triggered via voice activation.
  • The figure presented in Appendix C to this specification shows how such a menu would look. Some of the play patterns (such as play) have an additional sub-menu of choices. As an example of how the menu works, consider the following script (a sketch of such a menu structure follows it):
  • Amanda: “Mommy what should we do?”
  • Mom: “Let's Play.”
  • Amanda: “What should we play, Funny Face, Animal Talk, Let's Pretend?
  • Mom: “Funny Face.”
  • The doll then begins the Funny Face game routine.
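  • The hierarchical menu behind child-initiated play could be pictured as a nested structure navigated by the spoken choices the doll listens for at each level, as in the following sketch (Python; the menu contents are drawn loosely from the script above and from Appendix B, and the routine names are hypothetical):

```python
# Nested dict: a menu node has a spoken prompt and recognized answers; a
# string leaf names the play routine started when that answer is heard.
MENU = {
    "prompt": "Mommy, what should we do?",
    "choices": {
        "let's play": {
            "prompt": "What should we play, Funny Face, Animal Talk, Let's Pretend?",
            "choices": {
                "funny face":    "funny_face_routine",
                "animal talk":   "animal_talk_routine",
                "let's pretend": "pretend_routine",
            },
        },
        "let's eat": "eating_routine",
    },
}

def run_menu(menu, speak, listen):
    """Walk the menu: speak each prompt, match the player's reply against
    the recognized choices, and descend until a routine name is reached."""
    node = menu
    while isinstance(node, dict):
        speak(node["prompt"])
        reply = listen().lower()
        node = node["choices"].get(reply, node)   # unrecognized reply: re-prompt
    return node                                    # the routine to start

# Session corresponding to the script above:
answers = iter(["Let's Play", "Funny Face"])
print("starting:", run_menu(MENU, speak=print, listen=lambda: next(answers)))
```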
  • Doll-initiated Play. The internal clock keeps track of the date and the hours and minutes. The program of the controller uses the clock to determine the following behaviors. Eating: When told that "It's time to eat," or when the doll decides it is "hungry," between 6 AM and 10 AM the doll will ask for breakfast; between 10 AM and 3 PM the doll will ask for lunch; and between 3 PM and 8 PM the doll will ask for dinner. At other times the doll will ask for a snack. Each of the eating routines involves different logic and accessories, such as the simulated foodstuffs elsewhere herein described.
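  • A minimal sketch of the time-of-day meal selection just described (Python, assuming a `now_hour` value taken from the internal clock):

```python
def meal_request(now_hour):
    """Choose which meal the "hungry" doll asks for, based on the clock
    hour (24-hour), per the windows described above."""
    if 6 <= now_hour < 10:
        return "breakfast"
    if 10 <= now_hour < 15:
        return "lunch"
    if 15 <= now_hour < 20:
        return "dinner"
    return "a snack"

for hour in (7, 12, 18, 22):
    print(hour, "->", "I'm hungry, can I have " + meal_request(hour) + "?")
```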
  • Waking and Sleeping. The doll will express a desire for bed at the bedtime specified by the user in the initial set-up procedure. The programmed child-like behavior includes some behavior in anticipation; as example, the doll may speak "Mommy, it's almost time for bed." The doll may also be programmed to wake up from the sleep condition at a set wake-up time specified by the user.
  • Greetings and Announcements. On waking, the doll may greet her “mommy” with an appropriate time-based phrase, such as “Good afternoon mommy!” Occasionally on waking, the doll will also announce the time of day.
  • Dressing. On awakening, the doll may ask for different clothing. In the morning the doll asks for her dress. In the evening, the doll asks for her nightie.
  • Holidays and Special Days. The doll knows (e.g. is programmed to recognize) a number of holidays. On those days the doll will occasionally speak out with "Happy (holiday) mommy!" or a similar phrase, where (holiday) is the given day. The doll is also programmed to anticipate a holiday, say a month in advance, and to speak in anticipation of it. As example, the doll may be programmed to speak "Mommy, Santa Claus is coming soon!" some weeks prior to Christmas.
  • Frequency of Activity. The doll keeps track of the frequency with which many behaviors are performed and tries to avoid repeating a behavior too many times in a given interval of time. The doll also uses time intervals to create more realistic behavior. As example, if the doll has visited the potty recently and is asked to go to the potty again, the doll will speak: "Mommy, I just went potty. You want me to try again?"
  • The simulated day of the doll is divided into a series of ten sequential time windows. Within each window, certain play patterns may be initiated by the doll, occurring with a particular frequency. Some play patterns occur only within a particular time window, such as lunch or waking, while other play patterns are present in many windows. Each play pattern is assigned a percentage likelihood of occurring at intervals within a window.
  • There are some rules to keep in mind when establishing the patterns: 1. Ordination: Certain play patterns follow other play patterns within a given window. Dressing follows waking, for example. 2. Frequency Limits: Play patterns will only occur so often. If the doll engaged in a play pattern five minutes ago, she waits a certain period of time before initiating that play pattern again. (A sketch of this selection logic appears after the table below.)
  • To determine which play patterns might appear in a particular time window reference may be made to the table that follows:
    Time Windows and Play Patterns
    Time Window Available Play Patterns
    1 Wake Up <Wake up sequence>
    I'm thirsty.
    I have to go potty.
    I love you mommy.
    Let's play.
    Mommy do my hair?
    My tummy hurts.
    2 Breakfast I'm hungry, can I have breakfast?
    I'm thirsty.
    I have to go potty.
    I love you mommy.
    Let's play.
    Mommy do my hair?
    My tummy hurts.
    3 Between Time I'm hungry, can I have a snack?*
    I'm thirsty.
    I have to go potty.
    I love you mommy.
    Let's play.
    Mommy do my hair?
    My tummy hurts.
    4 Lunch I'm hungry, can I have lunch?
    I'm thirsty.
    I have to go potty.
    I love you mommy.
    Let's play.
    Mommy do my hair?
    My tummy hurts.
    5 Between Time (see above)
    6 Dinner I'm hungry, can I have dinner?
    I'm thirsty.
    I have to go potty.
    I love you mommy.
    Let's play.
    Mommy do my hair?
    My tummy hurts.
    7 Between Time (see above)
    8 Bed Prep Need to get ready for bed.
    I'm hungry, can I have a snack?*
    I'm thirsty.
    I have to go potty.
    I love you mommy.
    Let's play.
    Mommy do my hair?
    My tummy hurts.
    9 Bed Time <Delay Tactics>
    I'm hungry, can I have a snack?*
    I'm thirsty.
    I have to go potty.
    I love you mommy.
    Let's play.
    Mommy do my hair?
    My tummy hurts.
    <Going to sleep.>
    10 Sleep <Wake response.>

    *snack chosen randomly
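  • The two rules above, combined with the per-window percentages, might be realized along the following lines (Python sketch; the window table, the percentage weights and the five minute cooldown are illustrative placeholders, not the figures used in a production doll):

```python
import random
import time

# Per-window table: play pattern -> percentage chance of being initiated
# when the doll looks for something to do (placeholder values).
WINDOW_PATTERNS = {
    "Breakfast": {"ask_breakfast": 40, "thirsty": 15, "potty": 15,
                  "love_you": 10, "lets_play": 10, "hair": 5, "tummy": 5},
}

COOLDOWN_SECONDS = 300      # frequency limit: not the same pattern twice within 5 minutes
last_initiated = {}         # pattern -> time it last ran

def pick_pattern(window, now=None):
    """Pick a doll-initiated play pattern for the current window, skipping
    any pattern still inside its frequency-limit cooldown."""
    now = time.time() if now is None else now
    candidates = {p: w for p, w in WINDOW_PATTERNS[window].items()
                  if now - last_initiated.get(p, -COOLDOWN_SECONDS) >= COOLDOWN_SECONDS}
    if not candidates:
        return None
    patterns, weights = zip(*candidates.items())
    choice = random.choices(patterns, weights=weights)[0]
    last_initiated[choice] = now
    return choice

print(pick_pattern("Breakfast"))
print(pick_pattern("Breakfast"))   # the first choice is now on cooldown
```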
  • Special Days. In addition to daily, time-based, doll-initiated play, the preferred doll embodiment also features annual, time-based, doll-initiated play. Programmed with a series of special days, the doll controller enters an "anticipation mode" two weeks prior to each special day, and then a "special day mode" on the day itself. While in these modes, certain aspects of the simulated doll behavior are changed. Some of the regular play patterns are changed to reflect the special day. For example, upon waking on the little girl's birthday, the doll would say "Happy birthday mommy!" instead of (or in addition to) her normal waking behavior.
  • Other new play patterns may be introduced by the special holiday. For example, Amanda might say something about the special day at any point in the day. One may think of these modes as overlays on the regular time window table.
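  • The anticipation and special day modes might be keyed off the calendar roughly as follows (Python sketch; the two-week window is from the text, while the helper name and the dates in the example table are hypothetical):

```python
import datetime

def special_day_mode(today, special_days, anticipation_days=14):
    """Return ("special day", name), ("anticipation", name), or None,
    per the two-week anticipation window described above."""
    for name, (month, day) in special_days.items():
        date = datetime.date(today.year, month, day)
        if today == date:
            return ("special day", name)
        if datetime.timedelta(0) < (date - today) <= datetime.timedelta(days=anticipation_days):
            return ("anticipation", name)
    return None

SPECIAL_DAYS = {"Christmas": (12, 25), "Birthday": (6, 1)}   # hypothetical birthday
print(special_day_mode(datetime.date(2005, 12, 14), SPECIAL_DAYS))   # anticipation of Christmas
print(special_day_mode(datetime.date(2005, 12, 25), SPECIAL_DAYS))   # special day mode
```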
  • Transition from Child-Initiated Play to Doll-Initiated Play. To see how Amanda moves between child- and doll-initiated play patterns, consider what happens when the doll character, Amanda, awakens from sleep. When wakened from a sleeping state by her right hand being squeezed or by being hugged, she asks for her mom (the little girl player). Upon hearing her mom speak her name, she asks mom what she wants to do. At this point Amanda is waiting in the child-initiated play mode. If mom does not give a response within a certain time period, Amanda will ask for something to do, taken from the doll-initiated play patterns, depending on which time window the doll is then in; the doll's clock is always running. If Amanda still receives no response, she goes into a sleep routine and essentially falls asleep.
  • Another way Amanda may enter doll-initiated play is after finishing a child-initiated sequence. Upon receiving no input for fifteen seconds, Amanda will suggest something from her list of options available in that time window, or she may ask again what her mom wants to do. If Amanda receives no response, she will fall asleep naturally.
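  • The waiting-and-timeout behavior described in the two preceding paragraphs might look like the following rough sketch (Python; the fifteen-second timeout is from the text, while the helper names and the number of attempts before sleep are assumptions):

```python
def await_player(listen_for_reply, suggest_activity, go_to_sleep,
                 reply_timeout=15.0, attempts_before_sleep=2):
    """Wait in child-initiated mode for a spoken request; after each
    timeout, fall back to a doll-initiated suggestion, and finally sleep."""
    for _ in range(attempts_before_sleep):
        reply = listen_for_reply(timeout=reply_timeout)
        if reply is not None:
            return reply              # child-initiated play continues
        suggest_activity()            # doll-initiated suggestion for this time window
    go_to_sleep()                     # still no response: the doll falls asleep
    return None

# Example with stand-in callables (no reply is ever heard):
await_player(lambda timeout: None,
             lambda: print("Mommy, let's play!"),
             lambda: print("<doll falls asleep>"))
```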
  • Special Circumstances: Some special circumstances will alter the play patterns available in both child initiated and doll-initiated play.
  • The clothing that is worn by the doll will impact the "dressing" play pattern. Holidays and birthdays will impact doll-initiated play as outlined above in the "doll-initiated play" section. Child-initiated play will be affected by changes to the birthday and holiday menu. For example:
      • Mom: What is the next holiday?
      • Amanda: Umm, Christmas mommy!
      • Mom: What do you want from Santa Claus?
      • Amanda: I want a new dress <or some other after-market accessory> from Santa Claus.
  • Power-Down/Up: If, during the regular time-triggered play of the doll, the doll fails to receive a response from her mom, say within ten minutes, the doll will go to sleep, essentially entering a power-down mode, during which only the internal clock remains powered up and continues to operate. The foregoing allows the child to place the doll in bed and refrain from responding to the doll so the doll can go to sleep, that is, enter the sleep mode. Squeezing the hand of the doll or hugging the doll causes the doll to awaken. Actuation of the hand switch or hug switch is detected by the controller 20, which is programmed to recognize the input during the power down mode as requiring restoration of the electrical power and re-commencement of the doll activity program, signifying the awakening from slumber.
  • When the doll is in the sleep mode, the internal clock in the doll continues to run, and, as recalled, during the set-up procedure, the doll may have had a definite wake-up time set by the mother (or child). That wake-up time is usually some time during the morning, say 7:00 AM. Thus should the sleep mode continue through to that wake-up time, the controller detects attainment of the wake up time, and essentially wakes up the doll, placing the doll into the mode for normal activities, elsewhere herein described.
  • The doll can also be placed in a quiet mode, when desired. As example, if the child is distracted by some other project, and the doll constantly nags the child but has not yet gone to sleep, the child may wish temporary silence. By squeezing the right hand (thereby operating the right hand sensor) for a predetermined period of time, suitably seven seconds, the controller is programmed to interpret the action as a command for the doll to power down. Once the controller detects that action, the controller issues instructions to broadcast a verbal message, specifically, "O.K. Mommy, I'll be quiet now," giving the child oral feedback of the activation, and then prevents further audible broadcasts from the doll. After four minutes in that condition, the doll broadcasts the query: "Can we talk now?" If the doll does not hear a response from the child within a short interval, the doll controller 20 powers down the electronics, placing the doll in sleep mode, and waits for a wake-up command. In either the quiet mode or the sleep mode, the doll is reactivated or, as otherwise stated, awakened, simply by squeezing the right hand of the doll (e.g. operating the right hand sensor 38) or by giving the doll a hug (e.g. operating the hug switch 33).
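  • The quiet-mode timing just described might be sketched as follows (Python; the seven second squeeze, the four minute silence and the spoken messages are from the text, while the helper names and the ten second reply interval are assumptions):

```python
def quiet_mode(speak, wait_minutes, heard_reply, power_down):
    """Quiet mode, entered after the right hand is held squeezed for about
    seven seconds: acknowledge, stay silent, ask to resume, and power down
    to the sleep mode if no reply is heard."""
    speak("O.K. Mommy, I'll be quiet now.")   # oral feedback of activation
    wait_minutes(4)                           # suppress all broadcasts for four minutes
    speak("Can we talk now?")
    if not heard_reply(seconds=10):           # no response within a short interval
        power_down()                          # sleep mode; a hand squeeze or hug wakes the doll

quiet_mode(print, lambda minutes: None,
           lambda seconds: False, lambda: print("<sleep mode>"))
```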
  • Response Variation: Depending on how much room there is for voice information, some of Amanda's responses can vary slightly. Such variation should be applied as close to the top of her menu tree as possible. Amanda will also occasionally (rarely) disagree with her mom, suggesting another play pattern.
  • Discipline Routine: During child initiated play, Amanda may occasionally, but rarely, misbehave. The following exchange between the doll and the child can take place.
      • Amanda: No! I don't want to!
      • Mom: Amanda you are doing a bad thing.
      • Amanda: I'm sorry mommy.
      • Mom: Good girl Amanda.
  • Although the foregoing embodiment has been described in connection with the English language, it is appreciated that other languages may alternatively be employed in dolls intended for children in non-English speaking countries.
  • In other more expensive embodiments one can include additional voice ROM and store the same messages in additional languages to provide a multi-language doll. The start-up programming for the user would then include an additional set-up step presenting and requesting selection of the particular language for the doll to speak. Language selection is accomplished by toggling the left and right hand sensors in the same manner as in setting the wake and sleep times. Such a multi-language doll may be attractive also to parents who wish their child to learn a second language.
  • In addition to the incorporation of multiple languages as larger memories become practical for doll products, as earlier noted, the doll can store and be programmed to sing songs and to accompany the speaking parts with music and/or sound effects, with or without the parsing of short messages described above, and with or without digital compression.
  • In the foregoing description of a preferred embodiment there is included voice recognition that enables the doll to recognize the individual who initially turned power on and set up the doll as the "mother" of the doll. Indeed, in alternative embodiments of the doll, the doll may be programmed to request of the player, "say my name mommy," in which case the doll is able to confirm that the person who states the name of the doll has the same voice print pattern as that of the pseudo-mother. In still other embodiments of the invention, the doll may be programmed to recognize multiple persons and distinguish between those persons based on their different voice prints. As example, voice prints can be made of persons who are to be the grandmother, the grandfather, the uncle or aunt of the doll, or any other family members, and the doll is able to recognize and distinguish between those persons.
  • If the invention is incorporated within a male action figure, bonding may represent camaraderie, where the child is the leader of a band of heroes or, conversely, a friend of the doll owner is the leader of the enemy force. The foregoing form of bonding goes beyond bonding as a mere attachment to a loved and needed person.
  • As those skilled in the art appreciate, the foregoing implementation is illustrative and many other forms of specific semiconductor circuits may be substituted to accomplish the described functions of my invention.
  • As further example, depending on the voice (or voices) that the doll individually recognizes, the doll can be programmed to interact with different persons in different ways, depending on which recognized voice is speaking. That is, the memory of the doll may hold different speech messages, and the doll may say something different depending on the identity of the person speaking with the doll. For example, the doll could say "grandma" every time the doll would otherwise say the word "mommy," or the doll could say something entirely different.
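  • One way to picture the per-speaker substitution just described (Python sketch; the voice print identifiers and the title table are hypothetical, and the speaker identifier stands in for the output of the speaker dependent voice recognition program):

```python
# Title used in place of "mommy" for each enrolled voice print.
TITLE_BY_SPEAKER = {
    "voiceprint_1": "mommy",
    "voiceprint_2": "grandma",
    "voiceprint_3": "grandpa",
}

def personalized(message_template, speaker_id):
    """Fill a stored message with the title matching the recognized
    speaker's voice print (defaulting to "mommy")."""
    title = TITLE_BY_SPEAKER.get(speaker_id, "mommy")
    return message_template.format(title=title)

print(personalized("I love you, {title}!", "voiceprint_2"))   # -> "I love you, grandma!"
```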
  • The preferred embodiment of the doll is of the form of a female child under thirteen years of age and, more specifically, about two years of age, and is of the appropriate facial anatomy of that age and is dressed as such child. The doll produces the sounds, phrases, and words of the kind typically spoken by a two (2) year old female child by means of which the doll asks questions of and/or prompts the player to speak aloud one of several particular answers, word(s) or phrases that the doll anticipates the player to speak. In alternative embodiments the doll could be of the form of a human toddler and would be dressed as a toddler. The doll produces the sounds, phrases, and words of the kind typically spoken by a toddler by means of which the doll asks questions of and/or prompts the player to speak aloud one of several particular answers, word(s) or phrases that the doll anticipates the player to speak.
  • In still other embodiments the doll may be of the form of a teenage or adult female, possessing the facial appearance of a teenager and dressed as such a teenage female. The doll produces the sounds, phrases, and words of the kind typically spoken by a teenage female by means of which the doll asks questions of and/or prompts the player to speak aloud one of several particular answers, word(s) or phrases that the doll anticipates the player to speak.
  • In still other embodiments the doll is of the form of a male child, toddler, two-year old, tween, teen and so on, each respectively dressed as such character, and/or is of the appearance of a fantasy character and is dressed as such fantasy character. The doll produces the sounds, phrases, and words of the kind expected to be spoken by a fantasy character by means of which the doll asks questions of and/or prompts the player to speak aloud one of several particular answers, word(s) or phrases that the doll anticipates the player to speak. The doll in still other embodiments could be fabricated in the form of an animal character, either one that produces animal sounds or one that is designed to speak like a human. The doll produces the sounds, phrases, and words of the kind typically verbalized by the animal or by the animatronic character by means of which the doll asks questions of and/or prompts the player to speak aloud one of several particular answers, word(s) or phrases that the doll anticipates the player to speak.
  • In still another embodiment, the doll is of the form of a male child infant or toddler, in which case the primary play is for the user to act as a caregiver for the doll. In an embodiment of the doll as an older male child below the age of thirteen, the primary play would be for the doll to serve as a best buddy for the child, doing things together such as playing games, telling stories, going on adventures, role playing various characters, and learning and playing things together. In still other embodiments, the doll appears as a fantasy character, is dressed as such fantasy character, and produces the sounds, phrases, and words expected of a fantasy character by means of which the fantasy character asks questions of and/or prompts the player to speak aloud one of several particular answers, word(s) or phrases that the doll anticipates the player to speak.
  • In an embodiment of the doll as a teenage male or adult male, the primary play is as an action figure, a heroic or villainous, good or evil character. The young man imagines and plays out situations, scenarios, and adventures in which he assumes the persona of the character with whom he is playing, i.e., mentally the boy "IS" the character. Male action figures and characters presently available for boys' play are not interactive. Some contain mechanical mechanisms that enable movement when the boy pushes a lever on the torso, appendages, or head of the action figure. Often weapons are carried by action figures, and projectiles are often launched to emulate the firing of weapons and so forth. Depending on the size of the action figure, electronics are sometimes included that provide lights and sounds of weapons or short phrases spoken by the action figure.
  • For the first time ever in this embodiment, the action figure contains animatronics that move the face into expressions of happiness, rage/anger, fierceness, puzzlement, excitement, boredom, fatigue, impatience, nobility, and so forth. Such visual expression of actual emotions in an action figure is also believed to be revolutionary. Even more revolutionary is for the action figure to respond to a user both emotionally and with words in conversation. The figure could become angry or feign anger if the wrong response occurred. An example of that anger might be that the action figure wants to retrieve his weapon from the user, and the user will not turn the weapon over to the figure.
  • Perhaps even more revolutionary is for an action figure to communicate with the boy as either an ally or a combatant. The voice recognition feature enables a male player to assume the role of ally or enemy, i.e., friend or foe, when play commences. The action figure then assumes the friend or foe relationship with the player and through Virtual Conversational Interactivity plays out battles, adventures, and other scenarios with the boy. The embodiment thus enables an entirely new level of play. The ability for the figure to recognize clothing, protective armor, battle gear, weapons, and so forth and interact conversationally with the player is entirely new. This enables the figure and the young player to strategize how to game or battle. It also enables the player to "command" or instruct the action figure how to battle, attack, fight, and so forth. When two brothers or friends play together, and each has an action figure of this invention, then one player and his figure might assume the role of "good guys" and the other player and his figure the role of the villains. For the first time, real camaraderie is enabled between both young players and their action figures. They are a "real team." And the other "guys" are "real enemies."
  • The additional technology enabling the figures to recognize either clothing or protective armor, weapons, playthings, vehicles in which it sits, and other play accessories, together with the speech recognition technology, animatronics, and programming, enables all embodiments of the invention to possess a form of artificial intelligence. That intelligence provides an additional basis for the initiation of virtual conversational interactivity and play scenarios.
  • In still another embodiment of the invention the doll is of the appearance of a real animal, insect, fish, crustacean, or other living creature, or presents that living creature as a cartoon, such as, but not limited to, a dog, cat, bear, bird, bunny, reptile, horse, or pony, in which the animal's visual animatronic movements of the face and/or body suggest to the user that the doll possesses human characteristics, such as feelings and emotions. People instinctively associate such human emotions with the supposed facial expression and body language of living animals and, particularly in the case of pets, believe that an animal may be excited, happy, angry, scared, sleepy, and so forth. The addition of sounds of the living animal to such an embodiment, such as, but not limited to, a bark for a dog, a chirp for a bird, a meow for a cat, and so forth, adds realism. Additionally, the tone of such sounds is varied appropriately through the doll's programming to communicate what the animal is thinking. One example is the whine of a dog to indicate that something is wrong, pain, a need for attention, or begging. Such is the present interactivity of a real animal with a human, and such interactivity is often back and forth in that a human might ask a real pet, "what's wrong?", or "shut up!", or "are you hungry?", but not actually expect the pet to answer. In this case the dog may wag his tail and the human may believe that is a yes answer to the question, or the dog may continue to whine and the human may believe that the question did not address the correct cause of the whining.
  • In like manner, the programming of the doll dog causes the dog to have a need, and the child (or adult) user must discover the need of the animal in order to satisfy it. Such back and forth interactivity is used in addition to the animal doll's (dog's) facial animatronics, which are anthropomorphic in nature, i.e., they suggest to the child or user that the animal has feelings and emotions. I refer to that action as Anthropomorphic Virtual Interactivity™, "AVI™". Such AVI interactivity is back and forth in the real way that humans communicate. Sensor devices to determine whether the animal has been fed or groomed are present, just as in the human embodiment. A sensor in the dog's butt enables the puppy to know whether it went potty on a newspaper, or, in a cat's butt, whether it is over the litter box, so that they can be toilet trained. Voice recognition is present to bond the animal doll with its owner or master (the child or user). Speech recognition is present to enable the animal doll to recognize what the human is saying, i.e., it enables the animal doll to be "trained," to learn "commands", and to learn what is good behavior ("good dog" or "good kitty") or bad behavior ("bad dog!" or "bad kitty!"). Pet clothing can be put on the animal doll, and the doll will know that a particular coat for warmth or a collar for going for a walk has been put in place, enabling the animal doll to "know" or assume what the intention of the owner was in play. The AVI technology of the animal doll can additionally cause the animal to express feelings to the owner.
  • The doll in still other embodiments could be fabricated in the form of an animal character that is designed to speak like a human. The doll produces the sounds, phrases, and words of the kind typically verbalized by the animal or by the animatronic character by means of which the doll asks questions of and/or prompts the player to speak aloud one of several particular answers, word(s) or phrases that the doll anticipates the player to speak.
  • A child learns disappointment early in life when "mom" denies the child's request to do or be given something with a sternly spoken "no." The child feels the hurt of that rejection. It may cry because of the humiliation the child feels from rejection or, in any case, give mom a sorrowful and sad look as if to say "you don't love me anymore." So too, when the doll of this invention makes a request of its pseudo-mother, the child player may find herself called upon to draw the line and say "no" to the doll's request. Hearing the word "no" through speech recognition and being programmed to understand the meaning of the child's response, the doll will put on a sorrowful and sad face. The child, having personally experienced those same emotions, recognizes the humiliation that the doll should feel; that recognition gives the child an affirmation of sorts that the doll is a real person, even if only for the duration of the play period, and strengthens the emotional bond that should exist between the two.
  • The versatility of RFID programming adds to the capability for doll play in additional embodiments in which the RFID sensor is used to read at least two separate RFID tags essentially at about the same time. This capability is illustrated in connection with the potty accessory 40 in FIG. 4, to which reference is again made.
  • As recalled, doll 10 includes an RFID sensor positioned in the butt end or rear, and that sensor reads the RFID tag contained in the doll potty 40, whereby the doll "knows" it is seated on the potty. One of the articles of clothing worn by the doll is a diaper, and that diaper also contains an RFID tag so as to identify the diaper to the RFID sensor in the doll butt. In that way the doll "knows" it is wearing a diaper. The program, as earlier described in this specification, calls for the doll to request to go potty. If the doll is wearing a diaper at the time the call is made for the player to place the doll on the potty, the doll program would have the doll request that the diaper be removed first so that the doll is able to go to the potty without interference from the diaper. After the diaper is removed, the doll broadcasts a "thank you, mommy" or like message and asks again to be placed on the potty. The doll will know when it is seated on the potty, since the RFID sensor detects the RFID tag in the potty, and the doll will continue with the various programmed sounds of defecation earlier discussed.
  • However, if through error the player fails to remove the diaper and seats the doll on the potty wearing the diaper, the RFID sensor in the doll butt will sense and read both RFID tags. The doll thus knows that the player neglected to remove the diaper. Accordingly, the doll will provide a message, possibly correcting or berating the player, and decline to defecate until the diaper is first removed.
  • By using a larger antenna on the RFID tag sensor, the sensor can achieve wider detection coverage and is able to pick up RFID tags displaced at greater distances from the sensor. Thus the RFID tag in the diaper need not directly overlie the RFID tag in the potty when the doll is seated on the potty wearing the diaper. The sensor reads them both, and can read them in any order called for by the program.
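  • The dual-tag logic for the potty accessory could be sketched along the following lines (Python; the tag names and the response wording are illustrative only):

```python
def potty_response(tags_read):
    """Decide the doll's response from the set of RFID tags the butt
    sensor currently reads (potty and/or diaper)."""
    if "diaper" in tags_read and "potty" in tags_read:
        # The player forgot to remove the diaper before seating the doll.
        return "Mommy, you have to take my diaper off first!"
    if "diaper" in tags_read:
        return "Take my diaper off so I can go potty, please."
    if "potty" in tags_read:
        return "<proceed with the programmed potty sounds and messages>"
    return "Put me on my potty, please!"

print(potty_response({"diaper", "potty"}))
print(potty_response({"potty"}))
```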
  • As those skilled in the art recognize, an embodiment of a doll need not include each and every feature described herein. Some embodiments may contain a full complement of features, while other embodiments may omit one or more features and contain a lesser number of features. All such embodiments are included in the invention. These various embodiments of the invention may be addressed separately and/or in combination in the claims which follow in this specification.
  • It is believed that the foregoing description of the preferred embodiments of the invention is sufficient in detail to enable one skilled in the art to make and use the invention without undue experimentation. However, it is expressly understood that the detail of the elements comprising the embodiment presented for the foregoing purpose is not intended to limit the scope of the invention in any way, in as much as equivalents to those elements and other modifications thereof, all of which come within the scope of the invention, will become apparent to those skilled in the art upon reading this specification. Thus, the invention is to be broadly construed within the full scope of the appended claims.
  • Appendix A Achieving Facial Expressions
  • Defined Expressions: Eating Charts (Dinner, Snack, Breakfast, Lunch)
  • Dinner Chart
    “Yea! Amanda loves dinner!”
    Eyebrows: UP Eyebrows: NORMAL
    Eyes: NORMAL Eyes: NORMAL
    Mouth: WIDE Mouth: NORMAL
    Cheeks: UP Cheeks: NORMAL
    “Yea! Amanda loves Pizza!”
    Eyebrows: UP Eyebrows: NORMAL
    Eyes: NORMAL Eyes: NORMAL
    Mouth: WIDE Mouth: NORMAL
    Cheeks: UP Cheeks: NORMAL
    “Yea! Amanda loves chicken nuggets!”
    Eyebrows: UP Eyebrows: NORMAL
    Eyes: NORMAL Eyes: NORMAL
    Mouth: WIDE Mouth: NORMAL
    Cheeks: UP Cheeks: NORMAL
    “All done Mommy!”
    Eyebrows: UP Eyebrows: NORMAL
    Eyes: NORMAL Eyes: NORMAL
    Mouth: NORMAL Mouth: NORMAL
    Cheeks: UP Cheeks: NORMAL
    “Yea! I love juice.”
    Eyebrows: UP Eyebrows: NORMAL
    Eyes: NORMAL Eyes: NORMAL
    Mouth: WIDE Mouth: NORMAL
    Cheeks: UP Cheeks: NORMAL
    “All done mommy!”
    Eyebrows: UP
    Eyes: NORMAL
    Mouth: NORMAL
    Cheeks: UP
    “That was the bestest dinner ever!”
    Eyebrows: UP Eyebrows: NORMAL
    Eyes: NORMAL Eyes: NORMAL
    Mouth: NORMAL Mouth: NORMAL
    Cheeks: UP Cheeks: NORMAL
    “Yea! I love to have my teeth brushed!”
    Eyebrows: UP Eyebrows: NORMAL
    Eyes: NORMAL Eyes: NORMAL
    Mouth: WIDE Mouth: NORMAL
    Cheeks: UP Cheeks: NORMAL
  • Snack Chart
    <various phrases, all containing “Yea!”>
    Mouth opens wide on “Yea!”
    Eyebrows: UP Eyebrows: NORMAL
    Eyes: NORMAL Eyes: NORMAL
    Mouth: WIDE Mouth: NORMAL
    Cheeks: UP Cheeks: NORMAL
    “All done Mommy!”
    Eyebrows: UP Eyebrows: NORMAL
    Eyes: NORMAL Eyes: NORMAL
    Mouth: NORMAL Mouth: NORMAL
    Cheeks: UP Cheeks: NORMAL
    “That was the bestest snack ever!”
    Eyebrows: UP Eyebrows: NORMAL
    Eyes: NORMAL Eyes: NORMAL
    Mouth: NORMAL Mouth: NORMAL
    Cheeks: UP Cheeks: NORMAL
    “That was the bestest dinner ever!”
    Eyebrows: UP Eyebrows: NORMAL
    Eyes: NORMAL Eyes: NORMAL
    Mouth: NORMAL Mouth: NORMAL
    Cheeks: UP Cheeks: NORMAL
    “All done Mommy!”
    Eyebrows: UP Eyebrows: NORMAL
    Eyes: NORMAL Eyes: NORMAL
    Mouth: NORMAL Mouth: NORMAL
    Cheeks: UP Cheeks: NORMAL
  • Breakfast Chart
    “Yea! Amanda loves breakfast!”
    Eyebrows: UP Eyebrows: NORMAL
    Eyes: NORMAL Eyes: NORMAL
    Mouth: WIDE Mouth: NORMAL
    Cheeks: UP Cheeks: NORMAL
    “Yea! Amanda loves pancakes!”
    Eyebrows: UP Eyebrows: NORMAL
    Eyes: NORMAL Eyes: NORMAL
    Mouth: WIDE Mouth: NORMAL
    Cheeks: UP Cheeks: NORMAL
    “Surprise me mommy - I'll close my eyes.”
    Eyebrows: NORMAL
    Eyes: CLOSED
    Mouth: OPEN
    Cheeks: NORMAL
    “All done Mommy!”
    Eyebrows: UP Eyebrows: NORMAL
    Eyes: NORMAL Eyes: NORMAL
    Mouth: NORMAL Mouth: NORMAL
    Cheeks: UP Cheeks: NORMAL
    “Surprise me mommy - I'll close my eyes.”
    Eyebrows: NORMAL
    Eyes: CLOSED
    Mouth: OPEN
    Cheeks: NORMAL
    “All done Mommy!”
    Eyebrows: UP Eyebrows: NORMAL
    Eyes: NORMAL Eyes: NORMAL
    Mouth: NORMAL Mouth: NORMAL
    Cheeks: UP Cheeks: NORMAL
    “Yea! I love to have my teeth brushed!”
    Eyebrows: UP Eyebrows: NORMAL
    Eyes: NORMAL Eyes: NORMAL
    Mouth: WIDE Mouth: NORMAL
    Cheeks: UP Cheeks: NORMAL
    <various responses to food surprise>
    Eyebrows: UP Eyebrows: NORMAL
    Eyes: BLINK TWICE Eyes: NORMAL
    Mouth: WIDE Mouth: NORMAL
    Cheeks: NORMAL Cheeks: NORMAL
    <various responses to food surprise>
    Eyebrows: UP Eyebrows: NORMAL
    Eyes: BLINK TWICE Eyes: NORMAL
    Mouth: WIDE Mouth: NORMAL
    Cheeks: NORMAL Cheeks: NORMAL
  • Lunch Chart
    “Yea! Amanda loves lunch!”
    Eyebrows: UP Eyebrows: NORMAL
    Eyes: NORMAL Eyes: NORMAL
    Mouth: WIDE Mouth: NORMAL
    Cheeks: UP Cheeks: NORMAL
    “All done Mommy!”
    Eyebrows: UP Eyebrows: NORMAL
    Eyes: NORMAL Eyes: NORMAL
    Mouth: NORMAL Mouth: NORMAL
    Cheeks: UP Cheeks: NORMAL
    “Yea! I love juice.”
    Eyebrows: UP Eyebrows: NORMAL
    Eyes: NORMAL Eyes: NORMAL
    Mouth: WIDE Mouth: NORMAL
    Cheeks: UP Cheeks: NORMAL
    “All done Mommy!”
    Eyebrows: UP Eyebrows: NORMAL
    Eyes: NORMAL Eyes: NORMAL
    Mouth: NORMAL Mouth: NORMAL
    Cheeks: UP Cheeks: NORMAL
    “That was the bestest lunch ever!”
    Eyebrows: UP Eyebrows: NORMAL
    Eyes: NORMAL Eyes: NORMAL
    Mouth: NORMAL Mouth: NORMAL
    Cheeks: UP Cheeks: NORMAL
  • APPENDIX B
    Amazing Amanda Play Patterns Categorized
    Broad Category Sub-Category
    Child & Doll-Initiated
    1 Eating Breakfast
    Lunch
    Dinner
    Snack
    2 Play Songs
    Pretend (Birthday or Tea Party)
    Animal Talk Game
    Funny Face Game
    3 Potty
    4 Sleep
    5 Wake
    Doll-Initiated
    1 Hugs
    2 Sick
    3 Love
    4 Hair Play
    5 Special Calendar Days
    6 Tutorial
    Optional Accessory Play
    1 Dress Up
    2 Birthday Party*
    3 Tea Party*
    4 Stroller
    5 Tote

    *This play is separate and distinct from the “pretend” tea party and birthday party play patterns.
  • [Appendix C figure: hierarchical menu of child-initiated play choices]

Claims (49)

1. An interactive doll, said doll including a doll face and electronic means for initiating and conducting a verbal tete-a-tete on at least one subject with a living person, said verbal tete-a-tete being accompanied with facial expressions on the face of the doll that visually reinforce the tenor of the words spoken by the doll during said verbal tete-a-tete and/or constitute a visually expressed response to words spoken by said living person during said verbal tete-a-tete.
2. The interactive doll as defined in claim 1, wherein said interactive doll further comprises a self-contained battery powered doll.
3. The interactive doll as defined in claim 1 wherein said electronic means for initiating and conducting a verbal tete-a-tete includes:
a microphone;
a loudspeaker;
a processor for broadcasting verbal statements through said loudspeaker and for listening to verbal statements of said living person received through said microphone; and
a facial animator for producing an expression on said doll face that is indicative of a human emotion.
4. The interactive doll as defined in claim 1, wherein said at least one subject comprises a food selected for a meal.
5. The interactive self-contained battery powered doll defined in claim 3, wherein said electronic means further comprises:
a clock for keeping track of the time of day; and wherein said subject of a food selected for a meal is automatically selected by said electronic means based in part on the time-of-day tracked by said clock.
6. An interactive doll, said doll including a doll face and a programmed electronic processor for initiating and conducting a verbal tete-a-tete on at least one subject with a living person, said verbal tete-a-tete being accompanied with facial expressions of the doll face that either visually reinforce the tenor of the words spoken by the doll during said verbal tete-a-tete with said living person or constitute a visually expressed reply to words spoken by said living person during said verbal tete-a-tete.
7. The interactive doll as defined in claim 6, further including a microphone; and a loudspeaker; and wherein said programmed electronic processor for initiating and conducting a verbal tete-a-tete with a living person includes:
a first circuit, coupled to said microphone, for converting received verbal statements of said living person to an electrical signal that is output from said microphone;
a second circuit, coupled to said loudspeaker, for coupling electronic signals output from said electronic processor to said loudspeaker, said electronic signals output from said electronic processor being representative of verbal statements, whereby verbal statements are broadcast through said loudspeaker; and
facial actuators for producing an expression on said doll face that is indicative of a human emotion.
8. The interactive doll defined in claim 6, wherein said at least one subject comprises a food selected for a meal.
9. The interactive doll as defined in claim 8, wherein said interactive doll further comprises a self-contained battery powered doll.
10. The interactive self-contained battery powered doll defined in claim 8, wherein said electronic processor further comprises:
clock means for keeping track of the time of day; and wherein said subject of a food selected for a meal is selected by said electronic processor located in said doll based in part on the time-of-day tracked by said clock means.
11. An interactive doll that simulates chewing of a food product by a living creature comprising:
a doll, said doll comprising:
at least a torso and a head, said head including at least a mouth, and said mouth including at least a roof and a lower jaw;
said head further including a mouth control actuator for repetitively pivoting said lower jaw between a closed and open position to repetitively vary the size of said mouth opening between small and large to simulate chewing;
a radio frequency identification tag reader located in said roof of said mouth;
a memory for storing as digital data a chewing verbalization or sound produced by a person chewing;
a loudspeaker; and
electronic means coupled to said loudspeaker for reproducing and broadcasting through said loudspeaker said digital data concurrently with repetitive pivoting of said jaw between said open and closed positions by said mouth control actuator to both reproduce and broadcast said verbalization or sounds of a person that is chewing a food product and visually signify the act of chewing;
a controller;
said radio frequency identification tag reader for detecting the presence of a radio frequency identification tag of a simulated food product that is inserted into said mouth and supplying information in said tag to said controller; and wherein said controller, responsive to receiving information from said tag reader, for supplying a signal to said sound reproducer and to said mouth actuator to concurrently broadcast verbalizations of chewing from said doll and produce pivoting movement of said lower jaw to simulate chewing.
12. An interactive doll that simulates the body function of excreting body waste, comprising in combination:
a doll;
a potty;
said doll including at least a torso and a head;
said head including a face and facial control actuators for changing the demeanor of said face;
said torso including a butt, and said butt including at least a radio frequency identification tag reader;
said potty including a radio frequency identification tag for identifying said potty to a radio frequency identification tag reader;
a programmed controller;
a loudspeaker coupled to said programmed controller;
said radio frequency identification tag reader for reading said radio frequency identification tag located in said potty when said doll is seated on said potty and communicating information read from said tag to said programmed controller;
said programmed controller for broadcasting through said loudspeaker a verbal notice to a player of a need to eliminate body waste;
said programmed controller for also broadcasting through said loudspeaker a sound effect of body waste being excreted, said sound effect being broadcast within a predetermined period of time following broadcast of said notice to the player, irrespective of whether or not said doll is seated on said potty;
said programmed controller further for also broadcasting through said loudspeaker a verbal message when said torso was seated on said potty at the time of occurrence of said sound effect of body waste being excreted; and
said programmed controller still further for energizing said facial control actuators to change the demeanor of said face to a demeanor that represents a person that is upset when said torso was not seated on said potty at the time of broadcast of said sound effect of body waste being excreted.
13. The interactive doll system as defined in claim 12, wherein said programmed microcontroller is also for energizing said facial control actuators to change the demeanor of said face to present a satisfied demeanor when said torso is seated on said potty at the time of broadcast of said sound effect of waste being excreted.
14. An interactive electronic doll for producing and broadcasting voiced statements to a player, said doll including eyelids that blink repetitively during the broadcast of said voiced statements to simulate the naturally occurring eyelid movement of a living person during the course of speaking.
15. The interactive electronic doll as defined in claim 14, wherein said doll includes a doll head having a face, and further comprising:
a pair of eye sockets in said face;
a pair of eyeball and eyelid combinations, each said combination in said pair including a spherical member, and an axis of rotation;
a shaft in each of said eye sockets for supporting said spherical member in respective ones of said eye sockets;
each of said spherical members being mounted to a respective shaft for rotation in respective ones of said pair of eye sockets;
said face of said doll head including a mouth, said mouth including at least a lower jaw and said lower jaw being mounted to a pivot for pivoting movement;
a programmed microcontroller;
a mouth actuator for producing pivotal movement of said lower jaw responsive to energization by said microcontroller;
eyeball actuators for respectively rotating a respective one of said spherical members about a respective axis as directed by said programmed controller, either in one rotational direction to close said eyes or in an opposite rotational direction to open said eyes;
a loudspeaker coupled to said programmed microcontroller;
a sound reproducer, coupled to said loudspeaker, for reproducing and broadcasting through said loudspeaker selected verbalizations and/or sounds stored in ROM when energized by said microcontroller;
said programmed microcontroller energizing said eyeball and eyelid combination actuators to repetitively rotate said eyeball and eyelid combination to an angular position in which said eyelid lowers to cover said eye socket to simulate a closed eye, and immediately thereafter rotates that sphere in the opposite direction to lift said eyelid to a separate angular position and expose the eye to thereby produce a blink of said eye, and for concurrently energizing said sound reproducer, whereby verbalizations are broadcast through said loudspeaker, and for concurrently energizing said mouth actuator to repetitively pivot said lower jaw up and down and forward and backward to simulate speaking.
16. The interactive electronic doll as defined in claim 15, wherein
said programmed microcontroller commands said eyeball and eyelid combination actuators to produce multiple blinks of said eyes and concurrently reproduce and broadcast said voiced statements.
17. The interactive electronic doll as defined in claim 14, wherein said doll includes a doll head having a face, and further comprising:
a pair of eye sockets in said face;
a pair of eyeball and eyelid combinations, each said combination in said pair including a spherical member, an axis of rotation, an eyelid, said eyelid applied to a circumferential portion of said spherical member, and an eyeball applied to another circumferential portion of said spherical member;
a shaft in each of said eye sockets for supporting said spheres in respective eye sockets;
each of said eyeball and eyelid combinations being mounted to a respective shaft for rotation in respective ones of said pair of eye sockets;
a programmed controller;
eyeball and eyelid combination actuators for respectively rotating a respective one of said eyeball and eyelid combinations about a respective axis as directed by said controller, said rotating being in either a clockwise direction to close said eyes or in a counterclockwise direction to open said eyes;
said programmed controller energizing said eyeball and eyelid combination actuators to repetitively rotate said sphere to an angular position in which said eyelid covers said eye socket, simulating a closed eye, and immediately rotates that sphere in the opposite direction to retract said eyelid to a separate angular position for exposing the eye, to produce a simulated blink of said eye.
18. An interactive electronic doll, said doll including an electronic controller for controlling the functions performed by said doll, a loudspeaker and a microphone;
said electronic controller including:
a speaker dependent voice recognition program for enabling recognition of a player through the voice of the player, said voice recognition program for capturing a player's voice and assigning that voice with a title or name;
said voice recognition program for
broadcasting a verbal query from said doll requesting a person in the vicinity of the doll to speak a specified word,
receiving said specified word spoken in response to said query,
producing a first voice print from said received word,
storing said first voice print in a memory, and
assigning said stored first voice print with the title or name of an individual;
said voice recognition program additionally for
generating a follow-up verbal query requesting a person to again speak said specified word;
receiving a word spoken by said person in reply to said follow-up verbal query; and
producing a second voice print of said received word; and
comparing said second voice print to said voice print stored in memory under the name or title assigned to said individual, and, if the characteristics of said second voice print are essentially the same as said voice print stored in memory, broadcasting a verbal message that includes identifying said individual by name and/or title.
19. The interactive electronic doll as defined in claim 18, wherein if the characteristics of said second voice print are not essentially the same as said voice print in memory, broadcasting a verbal message announcing that the person that produced the second voice print is not entitled to be addressed by said name and/or title of said individual assigned to said first voice print; and wherein said assigned name or title comprises a word that is or has the same meaning as the English word “mommy,” whereby to define a relationship between the doll and the apparent mother of the doll.
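Claims 18 and 19 describe an enroll-then-verify flow: capture a voice print for a specified word, store it under a name or title such as "mommy," and later compare a fresh print against the stored one, greeting the speaker on a match and rejecting the address otherwise. The following is only a schematic sketch of that flow using a toy similarity measure on coarse feature vectors; a real speaker-dependent recognizer would extract far richer features, and the threshold, feature choice, and phrases here are illustrative assumptions.

```python
import math

def voice_print(samples):
    """Toy 'voice print': a few coarse statistics of an audio sample sequence.
    A real recognizer would use spectral features; this is illustrative only."""
    n = len(samples)
    mean = sum(samples) / n
    energy = sum(s * s for s in samples) / n
    zero_crossings = sum(
        1 for a, b in zip(samples, samples[1:]) if (a < 0) != (b < 0)
    ) / n
    return (mean, energy, zero_crossings)

def similarity(p1, p2):
    """Inverse Euclidean distance between two prints (1.0 = identical)."""
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(p1, p2)))
    return 1.0 / (1.0 + dist)

class SpeakerRecognizer:
    def __init__(self, threshold=0.9):
        self.enrolled = {}          # name/title -> stored first voice print
        self.threshold = threshold  # "essentially the same" cutoff (assumed)

    def enroll(self, name, samples):
        # Claim 18: store the first voice print under a name or title.
        self.enrolled[name] = voice_print(samples)

    def verify(self, name, samples):
        # Claims 18-19: compare a second print against the stored print.
        second = voice_print(samples)
        if similarity(self.enrolled[name], second) >= self.threshold:
            return f"Hi, {name}!"                      # identified by name/title
        return f"You don't sound like my {name}."      # claim 19: rejection message

if __name__ == "__main__":
    rec = SpeakerRecognizer()
    rec.enroll("mommy", [0.1, -0.2, 0.3, -0.1, 0.2, -0.3])
    print(rec.verify("mommy", [0.1, -0.2, 0.3, -0.1, 0.2, -0.3]))  # accepted
    print(rec.verify("mommy", [0.9, 0.8, 0.7, 0.9, 0.8, 0.9]))     # rejected
```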
20. An interactive electronic doll, said doll including a programmed electronic controller for controlling the functions performed by said doll, a loudspeaker and a microphone;
said programmed electronic controller including:
a speaker dependent voice recognition program for enabling recognition of a player through the voice of the player, said voice recognition program for capturing a player's voice when voicing a specified word, assigning that voice with the title or name of an individual and recognizing the individual who again speaks the specified word when requested to do so;
a speech recognition program for enabling recognition of a set of spoken words;
a control program for initiating said speaker dependent voice recognition program to determine if the voice of the player is recognized by said speaker dependent voice recognition program, but, if not, initiating said speech recognition program and terminating said speaker dependent voice recognition program.
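Claim 20 layers a control program over two recognizers: try the speaker-dependent voice recognition first, and if the player's voice is not recognized, terminate it and fall back to ordinary word-level speech recognition. A minimal sketch of that decision logic follows; the recognizer objects and their return values are assumptions made for illustration, not the patent's actual interfaces.

```python
def handle_utterance(utterance, speaker_recognizer, word_recognizer):
    """Claim 20 control flow: prefer speaker-dependent recognition, then fall back.

    speaker_recognizer(utterance) -> name of the recognized player, or None
    word_recognizer(utterance)    -> set of recognized words from a fixed vocabulary
    Both interfaces are illustrative stand-ins.
    """
    player = speaker_recognizer(utterance)
    if player is not None:
        return ("speaker", player)          # the player's voice was recognized
    # Not recognized: terminate speaker-dependent mode, use word recognition.
    return ("words", word_recognizer(utterance))

if __name__ == "__main__":
    known_voice = {"mommy voice": "mommy"}
    vocabulary = {"yes", "no", "hungry", "song"}

    speaker = lambda u: known_voice.get(u)                       # toy stand-in
    words = lambda u: {w for w in u.split() if w in vocabulary}  # toy stand-in

    print(handle_utterance("mommy voice", speaker, words))      # ('speaker', 'mommy')
    print(handle_utterance("yes sing a song", speaker, words))  # ('words', {...})
```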
21. An interactive electronic doll, comprising:
a torso, a head supported on said torso, and at least one pair of appendages;
at least one of said appendages including a hand;
a loudspeaker;
a microphone;
a memory;
a programmed electronic controller for running programs stored in said memory, broadcasting verbal statements through said loudspeaker, and for listening to verbal statements of a player that are received through said microphone;
said programs defining functions carried out by said doll, including a clock program and a calendar program and a speech recognition program;
said memory further storing words and phrases in digital form;
a pressure operated switch located in said hand, wherein squeezing of said hand operates said pressure operated switch;
said programmed electronic controller monitoring the operation of said pressure operated switch to detect whether said switch is operated or not;
an interactive program for setting the initial year, month and date of said calendar program and the initial time of said clock program; said program including the steps of:
broadcasting an instruction to the player to squeeze said hand, wherein pressure is applied to said pressure operated switch, and to release that squeeze when the correct year is broadcast from said speaker, followed by broadcasting the years in serial order through said loudspeaker, and, thereafter,
terminating the recital of the years when said squeeze is released from said hand and placing the name of the year last recited into a memory;
broadcasting the year last recited, and requesting the player to confirm that the year so broadcast is correct or not by answering yes or no;
and, if the answer is no, repeating the immediately preceding three steps until the foregoing answer is YES;
and, when the answer to the foregoing query is yes, broadcasting an instruction to the player to squeeze said hand, wherein pressure is applied to said pressure operated switch, and to release that squeeze when the correct month of the year is broadcast from said speaker, followed by broadcasting the months of the year in serial order through said loudspeaker;
terminating the recital of the months of the year when said squeeze is released from said hand and placing the name of the month last recited into said memory;
broadcasting said month of the year last recited in confirmation as the month selected and requesting the player to confirm that the month so broadcast is correct or not by answering yes or no;
and, if the answer is no, repeating the immediately preceding three steps until the foregoing answer is YES;
and, when the answer to the foregoing query is yes, broadcasting an instruction to the player to squeeze said hand, wherein pressure is applied to said pressure operated switch, and to release that squeeze when the correct day of the month is broadcast from said speaker, followed by broadcasting the days of the month through said loudspeaker in serial order;
terminating the recital of the days of the month when said squeeze is released from said hand and placing the day of the month last recited into a memory;
broadcasting said day of the month last recited in confirmation as the day selected and requesting the player to confirm that the day so broadcast is correct or not by answering yes or no;
and, if the answer is no, repeating the immediately preceding three steps until the foregoing answer is YES;
and, if the answer is yes, broadcasting the month, day, and year stored in said memory and requesting the player to confirm that the information broadcast is correct or not by answering yes or no;
and, if the answer is no, repeating the preceding steps of setting the year, month and date from the beginning until the final answer is YES;
broadcasting a statement to the player as to the positions of the small and large hands of a clock and requesting the player to confirm the correctness or not of that statement by answering yes or no;
and, if the answer is no, broadcasting an instruction to the player to squeeze said hand, wherein pressure is applied to said pressure operated switch, and to release that squeeze when the correct hour of day is broadcast from said speaker, followed by broadcasting the hours of the day in serial order through said loudspeaker;
terminating the recital of the hours of the day when said squeeze is released from said hand on attainment of the current hour, removing pressure from said pressure operated switch, and placing the hour of the day that was last recited into a memory;
broadcasting an instruction to the player to squeeze said hand, wherein pressure is applied to said pressure operated switch, and to release that squeeze when the correct minute of the hour is broadcast from said speaker, and, thereafter,
broadcasting the minutes of the hour in five minute increments in serial order through said loudspeaker;
terminating the recital of the minutes of the hour when said squeeze is released from said hand on attainment of the current minute closest to the five minute increment of one through twelve on the clock, removing pressure from said pressure operated switch, and placing the minute increment of the hour that was last recited into a memory;
and, either as the next step or when the answer to the foregoing query is yes, broadcasting information to the player on the meaning of AM and PM and querying the player if it is necessary to reset the present state of the clock from one of AM or PM to the other by answering yes or no;
and, when the answer to the foregoing query is yes, broadcasting a query to the player to answer if the time of day is AM by answering yes or no;
and, if the answer is yes, placing AM in said memory, but if no, then placing PM in said memory;
then, querying the player if they wish to observe daylight savings time and requesting the player to answer yes or no.
22. The interactive electronic doll as defined in claim 21, further comprising:
storing the answer to the last named query in memory and, when the answer is yes, automatically resetting the internal clock to move forward one hour in the spring on the day that daylight savings time begins, and to move the clock backwards one hour in the fall when daylight savings time ends, whereby the doll is aware of whether or not the clock is in daylight savings time, and, thereafter, automatically changing the time either forward one hour or backwards one hour as is appropriate at each daylight savings time event that occurs.
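Claims 21 and 22 set the calendar and clock through a repeated "squeeze, listen, release on the right value, confirm" loop, then handle daylight savings time automatically. The sketch below captures only the shape of that loop with a simulated hand switch; the prompt wording, value ranges, and the way confirmation is obtained are placeholders rather than the patent's actual dialogue.

```python
def select_by_squeeze(values, switch_released_at):
    """Recite values in serial order until the simulated squeeze is released.

    values             -- e.g. years, month names, or days of the month
    switch_released_at -- value at which the player releases the hand switch
    Returns the value recited when the squeeze was released (claim 21).
    """
    for value in values:
        print(f"Doll says: {value}")
        if value == switch_released_at:   # pressure removed from the switch
            return value
    return values[-1]

def confirm(value, answer_yes):
    """Broadcast the selection and ask for a yes/no confirmation (input assumed)."""
    print(f"Doll says: You picked {value}. Is that right?")
    return answer_yes

def set_date(year, month, day):
    """Run the year/month/day selection loops until each value is confirmed."""
    chosen = {}
    for label, values, target in (
        ("year", range(2005, 2011), year),
        ("month", ["January", "February", "March", "April"], month),
        ("day", range(1, 32), day),
    ):
        while True:
            picked = select_by_squeeze(list(values), target)
            if confirm(picked, answer_yes=(picked == target)):
                chosen[label] = picked
                break
    return chosen

if __name__ == "__main__":
    print(set_date(2006, "March", 15))
    # Claim 22: once a "yes" to daylight savings is stored, the internal clock is
    # shifted forward one hour each spring and back one hour each fall.
```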
23. An electronic doll comprising:
a programmed electronic microcontroller;
said programmed electronic microcontroller including programs that are run by said programmed electronic microcontroller;
a memory;
speech phrases and sound effects stored in said memory;
a loudspeaker for enabling broadcast of sounds in the immediate environment of said electronic doll;
a microphone for receiving sounds from the immediate environment for processing by said programmed electronic microcontroller;
sensors to detect the presence of a force impressed on the electronic doll and/or an object placed in contact with the electronic doll and enable said electronic microcontroller to identify the respective force or object; and
wherein said programs run by said electronic controller include at least a speech recognition program to recognize speech received via said microphone and a program for broadcasting words, phrases and sound effects through said loudspeaker;
said electronic doll further including a torso and doll head, and said doll head including a doll face; and
doll face manipulators for manipulating said doll face to produce a variety of facial expressions that portray a variety of human-like feelings, said doll face manipulators being controlled by said electronic controller to produce facial expressions, concurrently with the broadcast of words or sounds of a tenor that are not inconsistent with the emotional feelings of a living person portrayed by the respective facial expressions.
24. The electronic doll as defined in claim 23, wherein said programs run by said electronic microcontroller include a range of human emotional facial expressions and verbal responses, each verbal response matching the tenor of at least one of the human emotion facial expressions, which facial expressions are activated by the electronic microcontroller when particular words or phrases spoken, or not spoken, by the player are “heard”, or “not heard”, by the electronic microcontroller by means of the microphone coupled to an analog to digital converter that sends the now digital information to the electronic microcontroller, where the data is understood by means of the speech recognition program, the logic programmed into the memory of the doll thereby producing a particular emotional response to the reply, or lack thereof, of the player to a request or query that the doll made of the player;
and which particular emotional response is communicated by the doll visually, by the electronic microcontroller activating the actuators of the facial expression as set forth in the programmed logic controlling the signals of the electronic microcontroller to the actuators, and audibly, by causing the electronic microcontroller to broadcast a single verbal response, or one of several verbal responses, matching the tenor of the actuated human emotion facial expression, which verbal responses are stored in memory and are selected by the logic and programming of the electronic microcontroller;
wherein said reply, or lack thereof, of the player to a request or query that the doll made of the player is dependent on the electronic microcontroller “hearing or not hearing” a particular verbal response anticipated by the programming logic and speech recognition program, and also on detecting or not detecting other non-verbal events, such as
the pressing or not pressing of a requested switch, such as asking for a hug,
the presence or lack thereof of a requested food being fed to the doll, by the presence or lack of an RFID tag in the doll's mouth which is read or not read by the RFID reader in the roof of the doll's mouth;
being placed or not placed on the potty when the doll has expressed a need to go potty, by means of the RFID reader in the butt detecting or not detecting the RFID tag in the potty,
the presence or absence of the requested dress or nightie, by means of the RFID reader in the shoulder of the doll detecting or not detecting the correct RFID tag in the clothing, or not detecting any RFID tag in any clothing, in which case the doll thinks she is not dressed and becomes cold;
and not being allowed to have her own way or be given what she wants; all of which, through the programming and logic within the programs in the doll, create recognizable and understandable human emotional reactions of a human child of the age of the embodiment of this doll to how the player responds, verbally or physically, to a verbal query or request by the doll during a tete-a-tete between the doll and the player, or to the player not physically manipulating, putting, dressing, or in any other way handling the doll; and
wherein said programs and associated logic run by said electronic microcontroller thus enable the player, through verbal or physical interaction with the doll, to affect the emotional state of the doll, expressed visually by the doll changing its facial expressions and expressed verbally by the doll broadcasting words or sound effects whose tenor and tone further emphasize and embellish the changing emotional state of the doll.
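Claims 23 and 24 tie the doll's emotional display to whether an expected verbal or non-verbal response actually occurs: an anticipated word, a squeezed hand, the right RFID tag in the mouth, on the potty, or in the clothing. A compact way to picture that logic is an event-to-emotion dispatch table driving both the face manipulators and a matching spoken line, as in the hypothetical sketch below; the specific emotions, phrases, and event names are invented for illustration.

```python
# (expected event, whether it happened) -> (facial expression, spoken line).
# Expressions and phrases are illustrative stand-ins, not the patent's stored content.
RESPONSES = {
    ("hug", True): ("happy", "I love hugs!"),
    ("hug", False): ("sad", "Don't you want to hug me?"),
    ("food_tag_in_mouth", True): ("happy", "Yummy, thank you!"),
    ("food_tag_in_mouth", False): ("pouting", "I'm still hungry..."),
    ("potty_tag_under_doll", True): ("relieved", "All done!"),
    ("potty_tag_under_doll", False): ("distressed", "I really need to go potty!"),
    ("clothing_tag_on_back", True): ("content", "This dress is so pretty."),
    ("clothing_tag_on_back", False): ("cold", "Brrr, I'm not dressed!"),
}

def set_face(expression):
    print(f"[face manipulators] -> {expression}")

def speak(line):
    print(f"[loudspeaker] {line}")

def react(expected_event, detected):
    """Pick the facial expression and verbal response for a detected/undetected event."""
    expression, line = RESPONSES[(expected_event, detected)]
    set_face(expression)    # claim 24: actuate the face manipulators
    speak(line)             # claim 24: broadcast a response of matching tenor

if __name__ == "__main__":
    react("food_tag_in_mouth", True)      # RFID tag read in the roof of the mouth
    react("clothing_tag_on_back", False)  # no tag read: the doll "feels cold"
```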
25. The electronic doll described in claim 23, wherein said sensors include an electrical switch that operates in response to a squeezing force exerted by the player.
26. The electronic doll described in claim 23, wherein said sensors include an electrical switch located in the doll that operates in response to the doll being hugged by the player.
27. The electronic doll described in claim 23, wherein said sensors include an electrical switch that operates when the doll is moved in position and/or turned.
28. The electronic doll described in claim 23, wherein said sensors include a magnetic switch that operates in response to placement of a magnet in proximity to the magnetic switch.
29. The electronic doll described in claim 23, wherein said sensors include a brush sensor located in said doll head that recognizes the sweeping of a hairbrush over said doll head.
30. The electronic doll described in claim 28, wherein said sensors include a magnetic switch located in said doll head that operates in response to the sweeping of a hairbrush over said doll head, said hairbrush including a permanent magnet.
31. The electronic doll described in claim 23, wherein said sensors include a resistance measuring circuit for connection to an electrical resistor carried by a simulated food product in which the value of the electrical resistor identifies the particular food product.
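Claim 31 identifies a simulated food item by measuring the value of a resistor it carries. In principle this is just a lookup from a measured resistance, within some tolerance, to a food name; the sketch below assumes hypothetical resistor values and a 5% tolerance, neither of which comes from the patent.

```python
# Hypothetical nominal resistor values (ohms) assigned to simulated food products.
FOOD_RESISTORS = {
    1_000: "apple slice",
    4_700: "cookie",
    10_000: "carrot",
    22_000: "bottle",
}

def identify_food(measured_ohms, tolerance=0.05):
    """Return the food whose nominal resistance is within the given tolerance."""
    for nominal, food in FOOD_RESISTORS.items():
        if abs(measured_ohms - nominal) <= nominal * tolerance:
            return food
    return None  # unknown object, or nothing connected

if __name__ == "__main__":
    print(identify_food(4_650))   # within 5% of 4.7 kOhm -> 'cookie'
    print(identify_food(55_000))  # no match -> None
```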
32. The electronic doll described in claim 23, wherein said sensors include an RFID reader for reading RFID tags that are placed in proximity to said RFID reader.
33. The electronic doll described in claim 32, wherein said doll face includes a mouth, and wherein said RFID reader is embedded in the roof of said mouth for reading RFID tags carried on simulated food objects that are placed inside said mouth.
34. The electronic doll described in claim 32, wherein said RFID reader is embedded in the butt of said doll torso for reading an RFID tag carried on a simulated toilet accessory.
35. The electronic doll described in claim 32, wherein said RFID reader is embedded in the upper back of the doll torso for reading an RFID tag carried in an article of clothing worn on the doll.
36. The electronic doll described in claim 23, wherein said sensors include:
a first electrical switch that operates in response to a squeezing force exerted by the player; a second electrical switch located in the doll that operates in response to the doll being hugged by the player; a third electrical switch that operates when the doll is moved in position and/or turned; a first magnetic switch that operates in response to placement of a magnet in proximity to the magnetic switch; a second magnetic switch located in said doll head that operates in response to the sweeping of a hairbrush over said doll head, said hairbrush including a permanent magnet; a first RFID reader for reading RFID tags carried by simulated food objects that are placed inside the mouth of said doll face; a second RFID reader for reading an RFID tag carried on a simulated toilet accessory; and a third RFID reader for reading an RFID tag carried in an article of clothing worn on the doll.
37. The electronic doll described in claim 23, wherein said programs include means for enabling the doll to conduct a conversational tete-a-tete with a player on the subject of a simulated food object placed in the mouth of the doll.
38. A self-contained battery-powered interactive electronic doll in the form of a female child, said doll including:
a head; a torso; a pair of arms carried by said torso, each of said arms having a hand at the distal end of the arm;
a programmed electronic microcontroller;
stored speech phrases of the kind typically spoken by a female child; and
sound effects;
a speaker coupled to the microcontroller to enable the doll to broadcast appropriate verbalizations in the immediate environment by means of which the doll asks questions of and/or prompts the player to speak aloud one of several particular answers, word(s) or phrases that the doll anticipates the player will speak;
a microphone for picking up audible sounds and words propagating in the immediate environment of the doll and supplying those sounds and words to said electronic microcontroller;
speech recognition technology controlled by said controller that enables the doll to recognize certain spoken words received through said microphone;
a plurality of sensors carried by the doll, said sensors including:
an RFID reader embedded in the mouth of the doll for reading RFID tags contained in simulated food objects that are placed inside the mouth of the doll to identify the respective simulated food object;
an RFID reader embedded in the back side of the doll for reading RFID tags carried on clothing worn by the doll to identify the respective article of clothing worn by the doll; and
an RFID reader embedded in the butt of the doll for reading RFID tags carried on utilitarian objects on which the doll may be seated to identify the respective object on which the doll is seated;
a first pressure operated electrical switch carried in a hand of the doll that operates in response to a squeezing force exerted on the hand by the player;
a second pressure operated electrical switch located in the doll that operates in response to the player hugging the doll; a third electrical switch that operates in response to movement of the doll in position and/or rotation;
a magnetic switch located in said doll head that operates in response to the sweeping of a hairbrush over said doll head, said hairbrush including a permanent magnet;
said head including a doll face;
said programmed electronic processor further for initiating and conducting a verbal tete-a-tete with a living person on at least the subject of a simulated food object placed in the mouth of the doll, said verbal tete-a-tete being accompanied with facial expressions of the doll face that either visually reinforce the tenor of the words spoken by the doll during said verbal tete-a-tete with said living person or constitute a visually expressed reply to words spoken by said living person during said verbal tete-a-tete;
a doll face manipulator operated by said programmed electronic controller for manipulating the face of the doll to produce a variety of facial expressions that portray human-like feelings and to produce those expressions concurrently with the utterance by the doll of words or sounds of a tenor not inconsistent with the feelings portrayed by the respective facial expression; and
wherein said program for said doll includes program means for enabling the doll to conduct a conversational tete-a-tete with a player at least on the subject of a simulated food object placed in the mouth of the doll;
a clock-calendar for keeping track of the time of day and date; and wherein said subject of a food selected for a meal is selected by said electronic processor located in said doll based in part on the time-of-day tracked by said clock means; a pair of eye sockets in said face;
a pair of eyeball and eyelid combinations, each said combination in said pair including a spherical member, an axis of rotation, an eyelid, said eyelid applied to a circumferential portion of said spherical member, and an eyeball applied to another circumferential portion of said spherical member;
a shaft in each of said eye sockets for supporting said spherical member in respective ones of said eye sockets;
each of said eyeball and eyelid combinations being mounted to a respective shaft for rotation in respective ones of said pair of eye sockets;
said face of said doll head including a mouth, said mouth including at least a lower jaw and said lower jaw being mounted to a pivot for pivoting movement;
a mouth actuator for producing pivotal movement of said lower jaw responsive to energization by said controller; and
eyeball and eyelid combination actuators for respectively rotating a respective one of said eyeball and eyelid combinations about a respective axis as directed by said programmed controller, either in one rotational direction to close said eyes or in an opposite rotational direction to open said eyes;
said programmed controller energizing said eyeball and eyelid combination actuators to repetitively rotate said eyeball and eyelid combination to an angular position in which said eyelid covers said eye socket to simulate a closed eye, and immediately thereafter rotate that sphere in the opposite direction to retract said eyelid to a separate angular position and expose the eye to produce a blink of said eye, and for concurrently energizing said sound reproducer, whereby verbalizations are broadcast through said loudspeaker, and for concurrently energizing said mouth actuator to repetitively pivot said lower jaw up and down while said lower jaw is pivoting back and forth to simulate speaking.
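Claim 38 also recites a clock-calendar whose time of day partly drives which food the doll proposes for a meal. A one-function sketch of that selection is given below; the cut-off hours and menu items are assumptions made for illustration only.

```python
from datetime import datetime

# Illustrative menus keyed by meal; the patent does not specify these items.
MENUS = {
    "breakfast": ["oatmeal", "banana"],
    "lunch": ["sandwich", "apple slice"],
    "dinner": ["pasta", "carrot"],
    "snack": ["cookie"],
}

def meal_for_hour(hour):
    """Pick a meal name from the hour tracked by the doll's clock-calendar."""
    if 6 <= hour < 10:
        return "breakfast"
    if 11 <= hour < 14:
        return "lunch"
    if 17 <= hour < 20:
        return "dinner"
    return "snack"

def propose_food(now=None):
    now = now or datetime.now()
    meal = meal_for_hour(now.hour)
    food = MENUS[meal][now.day % len(MENUS[meal])]  # vary the suggestion by date
    return f"It's {meal} time. Can I have some {food}?"

if __name__ == "__main__":
    print(propose_food(datetime(2006, 11, 20, 8, 30)))   # breakfast suggestion
    print(propose_food(datetime(2006, 11, 20, 18, 0)))   # dinner suggestion
```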
39. A doll, said doll including a head and a face on said head and a microcontroller for manipulating the appearance of said face to present at least the appearance of a happy child and, alternatively the appearance of a person who is emotionally saddened, for initiating verbal requests and for recognizing if a player verbally responds to a request with a positive or negative response, wherein said microcontroller in response to recognizing a negative response from a player following the issuance of a verbal request to the player produces said emotionally saddened appearance, whereby the player feels that the player has emotionally hurt the doll.
40. An interactive doll, said doll including a doll face and electronic means for initiating and conducting a verbal tete-a-tete on at least one subject with a living person, with said living person's verbal responses to a query by said doll in the verbal tete-a-tete causing said doll to express facial expressions that are visually recognizable as human emotions and feelings communicated by particular configurations of the brows, eyes, eyelids, cheeks, mouth, and lips.
41. The interactive doll as defined in claim 40, wherein the facial expressions produced in response to words spoken by said living person to a query by said doll create particular feelings in said human towards said doll because of said doll's visual expression of the doll's feelings to said human regarding said human's responses during said verbal tete-a-tete.
42. The interactive doll as defined in claim 41, wherein the feelings in said human towards said doll, as said verbal tete-a-tete continues, become ones of bonding with said doll with mothering and nurturing feelings, which feelings increase exponentially, and said human is able to create the appearance of feelings in said doll towards said human, i.e., the human's ability to see increasing feelings of love and happiness or surprise or sadness in said doll caused by said human's response to and interaction with said doll;
and, in turn, the human's ability to feel increasing feelings of love and happiness and concern for said doll's well-being and happiness and contentment.
43. A doll and simulated food product combination that recognizes the simulated food product being fed to the doll selected from amongst multiple simulated food products, comprising in combination:
a plurality of different simulated food products, each of said food products including a respective RFID tag identifying the respective food product;
a doll torso and a doll head; said doll head including a mouth for receiving a simulated food product selected from amongst said plurality of different simulated food products and inserted in said mouth by a player;
a radio frequency identification (“RFID”) tag reader located in said doll;
a loudspeaker; and
a controller;
said RFID tag reader for wirelessly detecting the presence of an RFID tag of a simulated food product inserted into said doll mouth and receiving information from said RFID tag and supplying that information in said RFID tag to said controller;
said controller, responsive to receiving information from said RFID tag reader, for supplying output to said loudspeaker to broadcast information to the player relating to at least one food product.
44. The electronic doll described in claim 43, wherein said doll mouth includes a mouth roof, and wherein said RFID reader is embedded in said mouth roof for reading an RFID tag carried on a simulated food object that is placed inside said mouth.
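Claims 43 and 44 pair tagged simulated foods with a reader in the roof of the doll's mouth: when a tag is read, the controller looks up the food and broadcasts something about it. The sketch below is a minimal stand-in for that pairing; the tag IDs, food facts, and reader callback are all hypothetical.

```python
# Hypothetical RFID tag IDs carried by the simulated food products.
FOOD_TAGS = {
    "A1F3": ("apple slice", "Apples are crunchy and sweet."),
    "B2E9": ("cookie", "Cookies are a special treat."),
    "C7D0": ("carrot", "Carrots help you see in the dark!"),
}

def on_tag_read(tag_id):
    """Controller callback when the mouth-roof RFID reader detects a tag (claim 43)."""
    entry = FOOD_TAGS.get(tag_id)
    if entry is None:
        return "Hmm, I don't know what that is."
    food, fact = entry
    return f"Mmm, {food}! {fact}"   # broadcast information about the food product

if __name__ == "__main__":
    print(on_tag_read("B2E9"))  # tag on the cookie placed inside the mouth
    print(on_tag_read("ZZZZ"))  # unknown tag
```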
45. An electronic doll comprising:
a torso and doll head;
a programmed electronic microcontroller, said programmed electronic microcontroller including programs that are run by said programmed electronic microcontroller and a memory;
at least one RFID tag sensor carried by said torso for wirelessly detecting the presence of an RFID tagged object placed in proximity to said tag sensor and receiving information therefrom and identifying that object to said electronic microcontroller; and
a plurality of articles, each respective article including a respective RFID tag identifying the respective article;
said RFID tag sensor wirelessly receiving information from any of said RFID tags placed in proximity to said tag sensor by a player.
46. The electronic doll as defined in claim 45, wherein said plurality of articles comprise a plurality of articles of clothing adapted to fit on said torso, each respective article including a respective RFID tag identifying the respective article, said RFID tag being placed in proximity to said RFID tag sensor when the article is placed on said torso by said player, whereby said electronic microcontroller receives information identifying that article.
47. The electronic doll as defined in claim 46, wherein said RFID tag sensor is embedded in the upper back of said torso of said doll.
48. The electronic doll as defined in claim 45, wherein said RFID tag sensor is embedded in said head of said doll, and wherein said articles comprise head-mounted articles, including head coverings and hair accessories.
49. The electronic doll as defined in claim 45, wherein said doll further comprises a pair of doll legs attached to said torso, and respective doll feet attached to respective ones of said doll legs; wherein said RFID tag reader is carried in at least one of said doll feet; and wherein said articles comprise shoes to fit on said respective doll feet.
US11/602,882 2005-12-07 2006-11-20 Interactive Hi-Tech doll Abandoned US20070128979A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/602,882 US20070128979A1 (en) 2005-12-07 2006-11-20 Interactive Hi-Tech doll

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US74839105P 2005-12-07 2005-12-07
US11/602,882 US20070128979A1 (en) 2005-12-07 2006-11-20 Interactive Hi-Tech doll

Publications (1)

Publication Number Publication Date
US20070128979A1 true US20070128979A1 (en) 2007-06-07

Family

ID=38119409

Family Applications (1)

Application Number Title Priority Date Filing Date
US11/602,882 Abandoned US20070128979A1 (en) 2005-12-07 2006-11-20 Interactive Hi-Tech doll

Country Status (1)

Country Link
US (1) US20070128979A1 (en)

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5399115A (en) * 1992-08-04 1995-03-21 Toy Biz, Inc. Blinking doll with power storage mechanism
US5802488A (en) * 1995-03-01 1998-09-01 Seiko Epson Corporation Interactive speech recognition with varying responses for time of day and environmental conditions
US6055032A (en) * 1998-02-06 2000-04-25 Oddzon, Inc. Plush toy with selectively populated display
US6497607B1 (en) * 1998-12-15 2002-12-24 Hasbro, Inc. Interactive toy
US6149490A (en) * 1998-12-15 2000-11-21 Tiger Electronics, Ltd. Interactive toy
US6514117B1 (en) * 1998-12-15 2003-02-04 David Mark Hampton Interactive toy
US6537128B1 (en) * 1998-12-15 2003-03-25 Hasbro, Inc. Interactive toy
US6544098B1 (en) * 1998-12-15 2003-04-08 Hasbro, Inc. Interactive toy
US6554679B1 (en) * 1999-01-29 2003-04-29 Playmates Toys, Inc. Interactive virtual character doll
US6471565B2 (en) * 1999-02-19 2002-10-29 Groupe Berchet Interactive toy
US20020019193A1 (en) * 2000-02-28 2002-02-14 Maggiore Albert P. Expression-varying device
US6816802B2 (en) * 2001-11-05 2004-11-09 Samsung Electronics Co., Ltd. Object growth control system and method
US20060068366A1 (en) * 2004-09-16 2006-03-30 Edmond Chan System for entertaining a user
US20060270312A1 (en) * 2005-05-27 2006-11-30 Maddocks Richard J Interactive animated characters
US20070093170A1 (en) * 2005-10-21 2007-04-26 Yu Zheng Interactive toy system

Cited By (143)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140203797A1 (en) * 2002-08-22 2014-07-24 Bodymedia, Inc. Social contact sensing
US20050054440A1 (en) * 2003-09-10 2005-03-10 Wms Gaming Inc. Gaming machine with audio synchronization feature
US20050054442A1 (en) * 2003-09-10 2005-03-10 Anderson Peter R. Gaming machine with audio synchronization feature
US20060068366A1 (en) * 2004-09-16 2006-03-30 Edmond Chan System for entertaining a user
US20080009347A1 (en) * 2004-10-01 2008-01-10 Paul Radek Audio Markers in a Computerized Wagering Game
US9153096B2 (en) 2004-10-01 2015-10-06 Bally Gaming Inc. Audio markers in a computerized wagering game
US20100161122A1 (en) * 2005-12-19 2010-06-24 Chyi-Yeu Lin Robotic System For Synchronously Reproducing Facial Expression And Speech And Related Method Thereof
US20070142965A1 (en) * 2005-12-19 2007-06-21 Chyi-Yeu Lin Robotic system for synchronously reproducing facial expression and speech and related method thereof
US7738997B2 (en) * 2005-12-19 2010-06-15 Chyi-Yeu Lin Robotic system for synchronously reproducing facial expression and speech and related method thereof
US7904204B2 (en) * 2005-12-19 2011-03-08 Chyi-Yeu Lin Robotic system for synchronously reproducing facial expression and speech and related method thereof
US20080014830A1 (en) * 2006-03-24 2008-01-17 Vladimir Sosnovskiy Doll system with resonant recognition
WO2007112124A3 (en) * 2006-03-24 2008-10-02 Mattel Inc Doll system with resonant recognition
US20080014831A1 (en) * 2006-06-09 2008-01-17 Tim Rettberg Dolls with alterable facial features
US7744442B2 (en) 2006-06-09 2010-06-29 Mattel, Inc. Dolls with alterable facial features
US20130072083A1 (en) * 2006-08-02 2013-03-21 Nabil N. Ghaly Interactive play set
US8845384B2 (en) * 2006-08-02 2014-09-30 Nabil N. Ghaly Interactive play set
US20080096172A1 (en) * 2006-08-03 2008-04-24 Sara Carlstead Brumfield Infant Language Acquisition Using Voice Recognition Software
US20080215183A1 (en) * 2007-03-01 2008-09-04 Ying-Tsai Chen Interactive Entertainment Robot and Method of Controlling the Same
US20080255702A1 (en) * 2007-04-13 2008-10-16 National Taiwan University Of Science & Technology Robotic system and method for controlling the same
US20080293324A1 (en) * 2007-05-22 2008-11-27 Winway Corporation Ltd. Toy doll system
US8353767B1 (en) * 2007-07-13 2013-01-15 Ganz System and method for a virtual character in a virtual world to interact with a user
US8795022B2 (en) 2007-07-19 2014-08-05 Hydrae Limited Interacting toys
US20110143631A1 (en) * 2007-07-19 2011-06-16 Steven Lipman Interacting toys
US8827761B2 (en) 2007-07-19 2014-09-09 Hydrae Limited Interacting toys
US20090053971A1 (en) * 2007-08-21 2009-02-26 Man Kit Hui Interactive Doll
US20090088044A1 (en) * 2007-10-02 2009-04-02 Cheng Uei Precision Industry Co., Ltd. Interactive intellectual robotic toy and control method of the same
US20090104844A1 (en) * 2007-10-19 2009-04-23 Hon Hai Precision Industry Co., Ltd. Electronic dinosaur toys
US7988522B2 (en) * 2007-10-19 2011-08-02 Hon Hai Precision Industry Co., Ltd. Electronic dinosaur toy
US20090117819A1 (en) * 2007-11-07 2009-05-07 Nakamura Michael L Interactive toy
US8583956B2 (en) * 2008-01-31 2013-11-12 Peter Sui Lun Fong Interactive device with local area time synchronization capbility
US8046620B2 (en) * 2008-01-31 2011-10-25 Peter Sui Lun Fong Interactive device with time synchronization capability
US20090199034A1 (en) * 2008-01-31 2009-08-06 Peter Sui Lun Fong Interactive device with time synchronization capability
US8271822B2 (en) 2008-01-31 2012-09-18 Peter Sui Lun Fong Interactive device with time synchronization capability
US20100068970A1 (en) * 2008-01-31 2010-03-18 Peter Sui Lun Fong Interactive device with local area time synchronization capbility
US9128469B2 (en) 2008-01-31 2015-09-08 Peter Sui Lun Fong Interactive device with time synchronization capability
US20090197504A1 (en) * 2008-02-06 2009-08-06 Weistech Technology Co., Ltd. Doll with communication function
US8545283B2 (en) * 2008-02-20 2013-10-01 Ident Technology Ag Interactive doll or stuffed animal
US20090209170A1 (en) * 2008-02-20 2009-08-20 Wolfgang Richter Interactive doll or stuffed animal
WO2010001164A3 (en) * 2008-07-01 2010-05-27 Essential Nail Products Ltd Cosmetics training device
WO2010001164A2 (en) * 2008-07-01 2010-01-07 Essential Nail Products Ltd Cosmetics training device
US20100076597A1 (en) * 2008-09-25 2010-03-25 Hon Hai Precision Industry Co., Ltd. Storytelling robot associated with actions and method therefor
US20110230114A1 (en) * 2008-11-27 2011-09-22 Stellenbosch University Toy exhibiting bonding behavior
US20100181943A1 (en) * 2009-01-22 2010-07-22 Phan Charlie D Sensor-model synchronized action system
US20100207734A1 (en) * 2009-02-18 2010-08-19 Darfon Electronics Corp. Information Interactive Kit and Information Interactive System Using the Same
US20110318988A1 (en) * 2009-03-03 2011-12-29 Joanna Birch-Jones Toy comprising a fruit or vegetable shaped body and an audio device
US8280553B2 (en) * 2009-05-11 2012-10-02 Korea Institute Of Science And Technology Lip moving device for use in robots
US20100286828A1 (en) * 2009-05-11 2010-11-11 Korea Institute Of Science And Technology Lip moving device for use in robots
US8210917B2 (en) * 2009-06-03 2012-07-03 Hong Fu Jin Precision Industry (Shenzhen) Co., Ltd. Simulated eye for toy
US20100311305A1 (en) * 2009-06-03 2010-12-09 Hong Fu Jin Precision Industry (Shenzhen) Co., Ltd Simulated eye for toy
US9339729B2 (en) 2009-07-29 2016-05-17 Disney Enterprises, Inc. System and method for playsets using tracked objects and corresponding virtual worlds
US8939840B2 (en) * 2009-07-29 2015-01-27 Disney Enterprises, Inc. System and method for playsets using tracked objects and corresponding virtual worlds
US20110028219A1 (en) * 2009-07-29 2011-02-03 Disney Enterprises, Inc. (Burbank, Ca) System and method for playsets using tracked objects and corresponding virtual worlds
US20140165814A1 (en) * 2009-07-30 2014-06-19 Carlos Alberto Ibanez Vignolo "self-playing robot guitar comprising a biodegradable skin-leathern formed carcass and a biodegradable skin-leathern formed musical plectrum, and protein / amino acids"
CN102548624A (en) * 2009-08-20 2012-07-04 思想科技有限公司 Interactive talking toy with moveable and detachable body parts
US8684786B2 (en) 2009-08-20 2014-04-01 Thinking Technology Inc. Interactive talking toy with moveable and detachable body parts
WO2011020192A1 (en) * 2009-08-20 2011-02-24 Thinking Technology Inc. Interactive talking toy with moveable and detachable body parts
US8662955B1 (en) 2009-10-09 2014-03-04 Mattel, Inc. Toy figures having multiple cam-actuated moving parts
US20110124264A1 (en) * 2009-11-25 2011-05-26 Garbos Jennifer R Context-based interactive plush toy
US8568189B2 (en) 2009-11-25 2013-10-29 Hallmark Cards, Incorporated Context-based interactive plush toy
US9421475B2 (en) 2009-11-25 2016-08-23 Hallmark Cards Incorporated Context-based interactive plush toy
US20110223827A1 (en) * 2009-11-25 2011-09-15 Garbos Jennifer R Context-based interactive plush toy
US8911277B2 (en) 2009-11-25 2014-12-16 Hallmark Cards, Incorporated Context-based interactive plush toy
US20110145972A1 (en) * 2009-12-21 2011-06-23 Wallace Greene System for Social Interaction around a Personal Inspirational Message Selectively Hidden in a Display Article
US20110237154A1 (en) * 2010-03-26 2011-09-29 Nelson Gutierrez My Best Friend Doll
US8662954B2 (en) 2010-04-30 2014-03-04 Mattel, Inc. Toy doll for image capture and display
US8506343B2 (en) 2010-04-30 2013-08-13 Mattel, Inc. Interactive toy doll for image capture and display
US20190180164A1 (en) * 2010-07-11 2019-06-13 Nam Kim Systems and methods for transforming sensory input into actions by a machine having self-awareness
US20120059781A1 (en) * 2010-07-11 2012-03-08 Nam Kim Systems and Methods for Creating or Simulating Self-Awareness in a Machine
US8483873B2 (en) * 2010-07-20 2013-07-09 Innvo Labs Limited Autonomous robotic life form
US20120022688A1 (en) * 2010-07-20 2012-01-26 Innvo Labs Limited Autonomous robotic life form
US8926393B2 (en) * 2010-11-30 2015-01-06 Hallmark Cards, Incorporated Plush toy with non-rigid sensor for detecting deformation
US9573070B2 (en) * 2010-11-30 2017-02-21 Hallmark Cards, Incorporated Plush toy with non-rigid sensor for detecting deformation
US20150087198A1 (en) * 2010-11-30 2015-03-26 Hallmark Cards, Incorporated Plush toy with non-rigid sensor for detecting deformation
US20140073220A1 (en) * 2010-11-30 2014-03-13 Hallmark Cards, Incorporated Plush toy with non-rigid sensor for detecting deformation
US9235949B2 (en) 2011-05-09 2016-01-12 Build-A-Bear Retail Management, Inc. Point-of-sale integrated storage devices, systems for programming integrated storage devices, and methods for providing custom sounds to toys
US20130073087A1 (en) * 2011-09-20 2013-03-21 Disney Enterprises, Inc. System for controlling robotic characters to enhance photographic results
US9656392B2 (en) * 2011-09-20 2017-05-23 Disney Enterprises, Inc. System for controlling robotic characters to enhance photographic results
US9079113B2 (en) * 2012-01-06 2015-07-14 J. T. Labs Limited Interactive personal robotic apparatus
US9092021B2 (en) * 2012-01-06 2015-07-28 J. T. Labs Limited Interactive apparatus
US20130178981A1 (en) * 2012-01-06 2013-07-11 Tit Shing Wong Interactive apparatus
US20130178982A1 (en) * 2012-01-06 2013-07-11 Tit Shing Wong Interactive personal robotic apparatus
US20150138333A1 (en) * 2012-02-28 2015-05-21 Google Inc. Agent Interfaces for Interactive Electronics that Support Social Cues
US9616353B2 (en) 2012-03-30 2017-04-11 Mattel, Inc. Reconfigurable doll
US9072973B2 (en) 2012-05-31 2015-07-07 Build-A-Bear Workshop, Inc. Interactive play station
US9796095B1 (en) * 2012-08-15 2017-10-24 Hanson Robokind And Intelligent Bots, Llc System and method for controlling intelligent animated characters
US20150268717A1 (en) * 2012-10-04 2015-09-24 Disney Enterprises, Inc. Interactive objects for immersive environment
US10067557B2 (en) * 2012-10-04 2018-09-04 Disney Enterprises, Inc. Interactive objects for immersive environment
US10653969B1 (en) * 2012-11-15 2020-05-19 Shana Lee McCart-Pollak System and method for providing a toy operable for receiving and selectively vocalizing various electronic communications from authorized parties, and for providing a configurable platform independent interactive infrastructure for facilitating optimal utilization thereof
US20180117484A1 (en) * 2012-11-15 2018-05-03 LOL Buddies Enterprises System and Method for Providing a Toy Operable for Receiving and Selectively Vocalizing Various Electronic Communications from Authorized Parties, and For Providing a Configurable Platform Independent Interactive Infrastructure for Facilitating Optimal Utilization Thereof
US11020680B2 (en) * 2012-11-15 2021-06-01 Shana Lee McCart-Pollak System and method for providing a toy operable for receiving and selectively vocalizing various electronic communications from authorized parties, and for providing a configurable platform independent interactive infrastructure for facilitating optimal utilization thereof
US20140256213A1 (en) * 2013-03-07 2014-09-11 Steve Copeland Soft body toy with pressure sensing
US20140273722A1 (en) * 2013-03-15 2014-09-18 Mattel, Inc. Toy with an Illuminable Movable Portion
US10350505B2 (en) * 2013-03-15 2019-07-16 Mattel, Inc. Toy with an illuminable movable portion
US9728184B2 (en) 2013-06-18 2017-08-08 Microsoft Technology Licensing, Llc Restructuring deep neural network acoustic models
US9589565B2 (en) * 2013-06-21 2017-03-07 Microsoft Technology Licensing, Llc Environmentally aware dialog policies and response generation
US9697200B2 (en) 2013-06-21 2017-07-04 Microsoft Technology Licensing, Llc Building conversational understanding systems using a toolset
US10572602B2 (en) 2013-06-21 2020-02-25 Microsoft Technology Licensing, Llc Building conversational understanding systems using a toolset
US10304448B2 (en) 2013-06-21 2019-05-28 Microsoft Technology Licensing, Llc Environmentally aware dialog policies and response generation
US20140379353A1 (en) * 2013-06-21 2014-12-25 Microsoft Corporation Environmentally aware dialog policies and response generation
US20150056588A1 (en) * 2013-08-25 2015-02-26 William Halliday Bayer Electronic Health Care Coach
US9934817B2 (en) 2013-10-04 2018-04-03 Hallmark Cards, Incorporated System for recording, sharing, and storing audio
US20150111185A1 (en) * 2013-10-21 2015-04-23 Paul Laroche Interactive emotional communication doll
US20150118666A1 (en) * 2013-10-25 2015-04-30 Vanessa Rokjer Toilet Training Doll for Children
US10497367B2 (en) 2014-03-27 2019-12-03 Microsoft Technology Licensing, Llc Flexible schema for language model customization
US9529794B2 (en) 2014-03-27 2016-12-27 Microsoft Technology Licensing, Llc Flexible schema for language model customization
US9614724B2 (en) 2014-04-21 2017-04-04 Microsoft Technology Licensing, Llc Session-based device configuration
US9520127B2 (en) 2014-04-29 2016-12-13 Microsoft Technology Licensing, Llc Shared hidden layer combination for speech recognition systems
US10111099B2 (en) 2014-05-12 2018-10-23 Microsoft Technology Licensing, Llc Distributing content in managed wireless distribution networks
US9874914B2 (en) 2014-05-19 2018-01-23 Microsoft Technology Licensing, Llc Power management contracts for accessory devices
US10691445B2 (en) 2014-06-03 2020-06-23 Microsoft Technology Licensing, Llc Isolating a portion of an online computing service for testing
US9717006B2 (en) 2014-06-23 2017-07-25 Microsoft Technology Licensing, Llc Device quarantine in a wireless network
US10894216B2 (en) * 2015-08-04 2021-01-19 Luther Gunther Quick, III Dup puppet
US20200129875A1 (en) * 2016-01-06 2020-04-30 Evollve, Inc. Robot having a changeable character
US11529567B2 (en) * 2016-01-06 2022-12-20 Evollve, Inc. Robot having a changeable character
US20170203219A1 (en) * 2016-01-18 2017-07-20 Ryan Dean Toy Doll with Insertable Simulated Teeth
US10035072B2 (en) 2016-02-02 2018-07-31 Carmella Williams Interactive food storing device
US10452816B2 (en) 2016-02-08 2019-10-22 Catalia Health Inc. Method and system for patient engagement
US10360859B1 (en) * 2016-03-23 2019-07-23 Valerie J. Heilbron Eye animation device and method to show eye expression in 2D and 3D lighted displays
CN109478247A (en) * 2016-03-29 2019-03-15 富尔哈特机器人有限公司 Customization of a robot
US10512836B2 (en) * 2016-07-05 2019-12-24 Fujian Blue Hat Interactive Entertainment Technology Ltd. Interactive system based on light intensity recognition
US20190030424A1 (en) * 2016-07-05 2019-01-31 Fujian Blue Hat Interactive Entertainment Technology Ltd. Interactive system based on light intensity recognition
US20180016806A1 (en) * 2016-07-14 2018-01-18 Anissa Bullard Animatronic Agitating Assembly
US11285614B2 (en) * 2016-07-20 2022-03-29 Groove X, Inc. Autonomously acting robot that understands physical contact
US9914062B1 (en) * 2016-09-12 2018-03-13 Laura Jiencke Wirelessly communicative cuddly toy
JP2018117821A (en) * 2017-01-25 2018-08-02 群馬電機株式会社 Stuffed animal for welfare and nursing care
US20180236364A1 (en) * 2017-02-21 2018-08-23 Meng-Ying Chan Interactive doll structure
US20180303181A1 (en) * 2017-04-21 2018-10-25 Kimberly Morgan Mechanical apparatus of human form for displaying clothing, personal accessories and makeup
US11548147B2 (en) 2017-09-20 2023-01-10 Alibaba Group Holding Limited Method and device for robot interactions
US10596452B2 (en) * 2017-09-29 2020-03-24 Shenzhen Sigma Microelectronics Co., Ltd. Toy interactive method and device
US20190099666A1 (en) * 2017-09-29 2019-04-04 Shenzhen Sigma Microelectronics Co., Ltd Toy Interactive Method and Device
CN109903757A (en) * 2017-12-08 2019-06-18 佛山市顺德区美的电热电器制造有限公司 Speech processing method, device, computer-readable storage medium, and server
US20190209935A1 (en) * 2018-01-05 2019-07-11 American Family Life Assurance Company Of Columbus Animatronic toy
US20190209933A1 (en) * 2018-01-05 2019-07-11 American Family Life Assurance Company Of Columbus Child's Toy with Removable Skin
WO2019135780A1 (en) * 2018-01-05 2019-07-11 American Family Life Assurance Company Of Columbus User interface for an animatronic toy
CN108393893A (en) * 2018-04-19 2018-08-14 五邑大学 Intelligent robot and system based on machine perception and motion
US11557297B2 (en) 2018-11-09 2023-01-17 Embodied, Inc. Systems and methods for adaptive human-machine interaction and automatic behavioral assessment
US20210283516A1 (en) * 2018-12-14 2021-09-16 Groove X, Inc. Robot that wears clothes
CN112190958A (en) * 2020-09-08 2021-01-08 珠海市协骏玩具有限公司 Doll with simulation effect and control method thereof
WO2022165109A1 (en) * 2021-01-28 2022-08-04 Embodied, Inc. Methods and systems enabling natural language processing, understanding and generation
US20230131242A1 (en) * 2021-10-26 2023-04-27 Mattel, Inc. Interactive Toy System
WO2023086870A1 (en) * 2021-11-12 2023-05-19 WeCool Toys Inc. Actuatable toy animal
US20230201730A1 (en) * 2021-12-28 2023-06-29 Anthony Blackwell Speaking Doll Assembly
CN114982660A (en) * 2022-05-16 2022-09-02 江苏农牧科技职业学院 Intelligent sensing pet toy

Similar Documents

Publication Title
US20070128979A1 (en) Interactive Hi-Tech doll
US6554679B1 (en) Interactive virtual character doll
Turkle et al. Encounters with kismet and cog: Children respond to relational artifacts
Acredolo et al. Baby signs: How to talk with your baby before your baby can talk
Turkle et al. Relational artifacts with children and elders: the complexities of cybercompanionship
US8483873B2 (en) Autonomous robotic life form
Druin et al. Robots for kids: exploring new technologies for learning
Breazeal Designing sociable robots
US6227931B1 (en) Electronic interactive play environment for toy characters
Draper Out of my mind
JP2002532169A (en) Interactive toys
JP2003071763A (en) Legged mobile robot
Acredolo et al. Baby minds: Brain-building games your baby will love
JP2002205291A (en) Legged robot, movement control method for legged robot, and recording medium
Pally The reflective parent: How to do less and relate more with your kids
Fein et al. The activity kit for babies and toddlers at risk: How to use everyday routines to build social and communication skills
Briant Baby sign language basics: Early communication for hearing babies and toddlers
Plowman “Hey, hey, hey! It’s time to play”: Children’s interactions with smart toys
Barker The Mighty Toddler
Chappell How to Raise a Chatterbox: A Parents’ Guide to Speech and Language Development
Blakemore et al. Baby read-aloud basics: Fun and interactive ways to help your little one discover the world of words
Warburton Baby Sign Language: For Hearing Babies
Silberg 125 Brain Games for Babies: Simple Games to Promote Early Brain Development
Faull et al. Amazing Minds: The Science of Nurturing Your Child's Developing Mind with Games, Activities and More
Lathey et al. Small Talk: How to Develop Your Child’s Language Skills from Birth to Age Four

Legal Events

AS (Assignment)
Owner name: J. SHACKELFORD ASSOCIATES LLC., CALIFORNIA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SHACKELFORD, JUDITH ANN;ANDERSON, ADAM MICHAEL;HELLER, JASON GENE;REEL/FRAME:018632/0184
Effective date: 20061115

STCB (Information on status: application discontinuation)
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION