US20060041332A1 - Robot apparatus and control method therefor, and robot character discriminating method - Google Patents

Robot apparatus and control method therefor, and robot character discriminating method

Info

Publication number
US20060041332A1
Authority
US
United States
Prior art keywords
action
robot apparatus
emotion
character
node
Prior art date
Legal status
Abandoned
Application number
US11/244,341
Inventor
Kohtaro Sabe
Rika Hasegawa
Makoto Inoue
Current Assignee
Individual
Original Assignee
Individual
Priority date
Filing date
Publication date
Priority claimed from JP34120699A (JP2001157977A)
Priority claimed from JP34137599A (JP2001157983A)
Priority claimed from JP34120799A (JP2001157978A)
Application filed by Individual
Priority to US11/244,341
Publication of US20060041332A1
Legal status: Abandoned

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B 13/00 Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion
    • G05B 13/02 Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric
    • G05B 13/04 Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric involving the use of models or simulators
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/004 Artificial life, i.e. computing arrangements simulating life
    • G06N 3/008 Artificial life, i.e. computing arrangements simulating life based on physical entities controlled by simulated intelligence so as to replicate intelligent life forms, e.g. based on robots replicating pets or humans in their appearance or behaviour
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 13/00 Controls for manipulators
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B 2219/00 Program-control systems
    • G05B 2219/30 Nc systems
    • G05B 2219/39 Robotics, robotics to robotics hand
    • G05B 2219/39162 Learn social rules, greedy robots become non-greedy, adapt to other robots

Definitions

  • the present invention relates to a robot apparatus and a control method therefor, and a robot character discriminating method, and more particularly to, for example, a pet robot.
  • a four-legged walking type pet robot has been proposed and developed by the applicant of the present application.
  • Such a pet robot has a shape similar to a dog or a cat raised in an ordinary home, and is adapted to act autonomously in response to approaches from the user such as "patting" or "petting," to the surrounding environment, or the like.
  • this pet robot is equipped with a learning function that changes the revelation probability of a corresponding action on the basis of approaches such as "patting" and "petting" from the user, a growth function that changes stepwise the degree of difficulty and the level of complicatedness of the actions on the basis of the accumulation, elapsed time or the like of such approaches, and so on, to thereby provide high marketability and amusement characteristics as a "pet robot."
  • the behavioral model is switched to a new behavioral model every time the pet robot "grows"; therefore, at each "growth" the pet robot starts new actions as if its character had suddenly changed, and the result of the "learning" up to that point is canceled.
  • such a pet robot has an emotion model as well as a mechanism for determining actions by itself, and the feature of the pet robot which can be called its character changes without any influence from another robot.
  • the character of an animal, in actual circumstances, is formed under the influence of its surrounding environment, and when two pets are raised together, for example, the existence of one pet largely influences the formation of the character of the other pet.
  • the present invention has been achieved in consideration of the above described points, and aims to propose: firstly, a robot apparatus and a control method therefor which are capable of improving the entertainment characteristics; secondly, a robot apparatus and a control method therefor which are capable of improving the amusement characteristics; and thirdly, a robot apparatus and a robot character discriminating method which are capable of forming a more realistic character.
  • a robot apparatus is provided with: memory means for storing a behavioral model; and action generating means for generating actions by the use of partial or full state space of the behavioral model, and the action generating means is caused to change the state space of the behavioral model to be used for action generation while expanding or reducing the state space.
  • this robot apparatus is capable of reducing discontinuity in action output before and after change in state space to be used for action generation because the state space to be used for action generation continuously changes. Thereby, output actions can be changed smoothly and naturally, thus making it possible to realize a robot apparatus which improves the entertainment characteristics.
  • transition to a predetermined node in the behavioral model is described as transition to a virtual node consisting of imaginary nodes, a predetermined first node group is allocated to the virtual node concerned, and change means for changing a node group to be allocated to the virtual node is provided.
  • a control method for the robot apparatus is provided with a first step for generating action by the use of partial or full state space of the behavioral model, and a second step for changing state space to be used for action generation, of the behavioral models while expanding or reducing the state space.
  • transition to a predetermined node in the behavioral model is described as transition to a virtual node consisting of imaginary nodes, and there are provided a first step for allocating a predetermined node group to the virtual node concerned, and a second step for changing the node group to be allocated to the virtual node.
  • a robot apparatus which generates an action on the basis of the parameter value of each emotional behavior of an emotion model and/or the parameter value of each desire of an instinct model which are sequentially updated in accordance with prescribed conditions, the robot apparatus comprising restricting means for restricting the number of the emotional behaviors or the desires used for generating the action so as to increase or decrease them stepwise.
  • a robot apparatus which generates an action on the basis of the parameter value of each emotional behavior of an emotion model and/or the parameter value of each desire of an instinct model
  • the robot apparatus comprising emotional behavior and/or desire updating means for sequentially updating the parameter value of each emotional behavior and/or the parameter value of each desire, depending on corresponding sensitivity individually set to each emotional behavior and/or each desire, on the basis of externally applied stimulation and/or the lapse of time; and sensitivity updating means for evaluating an environment and respectively updating the sensitivity corresponding to each emotional behavior and/or each desire on the basis of the evaluated result. Consequently, according to this robot apparatus, the sensitivity of each emotional behavior and/or each desire can be optimized relative to the environment.
  • a control method for a robot apparatus which generates an action on the basis of the parameter value of each emotional behavior of an emotion model and/or the parameter value of each desire of an instinct model which are sequentially updated in accordance with prescribed conditions
  • the control method for the robot apparatus comprising: a first step of restricting the number of the emotional behaviors and/or the desires used for generating the action during an initial time; and a second step of increasing or decreasing stepwise the number of the emotional behaviors and/or the desires used for generating the action.
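
As a rough illustration of the stepwise restriction described in the preceding items, the following Python sketch keeps a full set of emotion parameters but exposes only a growing subset to action generation. The stage names, emotion names and stage-to-emotion table are assumptions made for illustration, not values taken from the patent.

```python
# Minimal sketch (assumed names), illustrating stepwise restriction of the
# emotional behaviors that are allowed to influence action generation.

# Emotions enabled at each hypothetical "growth stage"; later stages expose more.
EMOTIONS_PER_STAGE = {
    "baby":  ["JOY", "ANGER", "FEAR"],
    "child": ["JOY", "ANGER", "FEAR", "SADNESS"],
    "adult": ["JOY", "ANGER", "FEAR", "SADNESS", "SURPRISE", "DISGUST"],
}

def visible_emotions(all_params: dict, stage: str) -> dict:
    """Return only the emotion parameters usable for action generation at this stage."""
    enabled = EMOTIONS_PER_STAGE[stage]
    return {name: value for name, value in all_params.items() if name in enabled}

if __name__ == "__main__":
    params = {"JOY": 60, "ANGER": 20, "FEAR": 5, "SADNESS": 40, "SURPRISE": 70, "DISGUST": 10}
    print(visible_emotions(params, "baby"))   # only three emotions drive behavior
    print(visible_emotions(params, "adult"))  # the full set is used after "growth"
```
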
  • a control method for a robot apparatus which generates an action on the basis of the parameter value of each emotional behavior of an emotion model and/or the parameter value of each desire of an instinct model
  • the control method for a robot apparatus comprising a first step of updating the parameter value of each emotional behavior and/or the parameter value of each desire, depending on corresponding sensitivity individually set to each emotional behavior and/or each desire, on the basis of externally applied stimulation and the lapse of time; and a second step of evaluating an environment and respectively updating the sensitivity corresponding to each emotional behavior and/or each desire on the basis of the evaluated result. Therefore, according to this control method for a robot apparatus, the sensitivity of each emotional behavior and/or each desire can be optimized relative to the environment.
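
The sensitivity-based updating described above can be pictured with the short sketch below. The additive update rule, the clamping and the environment scoring are assumptions for illustration only; the patent does not give these formulas here.

```python
# Minimal sketch (assumed rule): each parameter value moves in proportion to a
# per-emotion sensitivity, and the sensitivities are re-tuned from a coarse
# evaluation of the environment (e.g. "bright and quiet" vs "dark and noisy").

def update_emotion(value: float, stimulus: float, sensitivity: float) -> float:
    """Shift one emotion parameter by the stimulus scaled by its sensitivity, clamped to 0-100."""
    return max(0.0, min(100.0, value + sensitivity * stimulus))

def update_sensitivity(sensitivity: float, environment_score: float) -> float:
    """Drift the sensitivity toward a value suited to the evaluated environment (score in 0-1)."""
    target = 0.5 + environment_score          # harsher environments -> higher target sensitivity
    return sensitivity + 0.1 * (target - sensitivity)

if __name__ == "__main__":
    fear, fear_sensitivity = 30.0, 1.0
    # A loud noise (stimulus +20) in a dark, noisy environment (score 0.8):
    fear_sensitivity = update_sensitivity(fear_sensitivity, 0.8)
    fear = update_emotion(fear, 20.0, fear_sensitivity)
    print(round(fear, 1), round(fear_sensitivity, 2))
```
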
  • the robot apparatus comprises detecting means for detecting an output from another robot and character discriminating means which discriminates a character of the other robot apparatus on the basis of a result detected by the detecting means.
  • the robot apparatus discriminates the character of the other robot apparatus on the basis of the detection result of the output from the other robot apparatus, which is detected by the detecting means, with the character discriminating means.
  • the robot apparatus can change its own character on the basis of the discrimination result of the character of the other robot apparatus, thus making it possible to realize a robot apparatus which can form its character more realistically.
  • the character discriminating method for a robot apparatus detects an output from a robot apparatus and discriminates the character of the above described robot apparatus on the basis of the detection result. Accordingly, this method makes it possible to realize a character discriminating method by which a robot apparatus can change its own character on the basis of the discrimination result of the character of another robot apparatus.
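
One crude way to picture this character discrimination is sketched below: the robot tallies the emotion expressions it detects from the other robot's outputs (sounds, motions, LED patterns) and maps the ratio to a character label, which could then bias its own parameters. The categories, thresholds and labels here are invented for illustration.

```python
# Minimal sketch (assumed categories): classify another robot's character from
# the ratio of emotion expressions detected in its outputs.
from collections import Counter

def discriminate_character(observed_expressions: list[str]) -> str:
    """Label the other robot's character from counted emotion expressions."""
    counts = Counter(observed_expressions)
    total = max(sum(counts.values()), 1)
    if counts["ANGER"] / total > 0.5:
        return "irritable"
    if counts["JOY"] / total > 0.5:
        return "cheerful"
    return "calm"

if __name__ == "__main__":
    detected = ["JOY", "JOY", "ANGER", "JOY", "SADNESS", "JOY"]
    print(discriminate_character(detected))   # -> "cheerful"
```
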
  • FIG. 1 is a perspective view showing an external appearance configuration of a pet robot according to the first and second embodiments.
  • FIG. 2 is a block diagram showing a circuit configuration of the pet robot.
  • FIG. 3 is a conceptual view showing software configuration of a control program.
  • FIG. 4 is a conceptual view showing software configuration of a middleware layer.
  • FIG. 5 is a conceptual view showing software configuration of an application layer.
  • FIG. 6 is a conceptual view for explaining a behavioral model library.
  • FIG. 7 is a schematic diagram showing probability automaton.
  • FIG. 8 is a chart showing a state transition table.
  • FIG. 9 is a conceptual view showing detailed configuration of the behavioral model library.
  • FIG. 10 is a conceptual view showing a growth model of the pet robot.
  • FIG. 11 is a conceptual view for explaining acquisition and forgetfulness of an action pattern along with growth.
  • FIG. 12 is a conceptual view for explaining a difference file in the first embodiment.
  • FIG. 13 is a conceptual view for explaining transition from a plurality of nodes to a starting point node of one action pattern.
  • FIG. 14 is a conceptual view for explaining utilization of a virtual node.
  • FIG. 17 is a conceptual view for explaining a difference file according to the second embodiment.
  • FIG. 18 is a perspective view showing the configuration of the external appearance of a pet robot according to the third and fourth embodiments.
  • FIG. 19 is a block diagram showing the circuit configuration of the pet robot.
  • FIG. 20 is a conceptual view showing the software configuration of a control program.
  • FIG. 21 is a conceptual view showing the software configuration of a middleware layer.
  • FIG. 22 is a conceptual view showing the software configuration of an application layer.
  • FIG. 24 is a schematic diagram showing a probability automaton.
  • FIG. 25 is a chart showing a state transition table.
  • FIG. 26 is a conceptual view showing the detailed configuration of the behavioral model library.
  • FIG. 28 is a conceptual view showing an emotion parameter file for each “growth stage”.
  • FIG. 29 is a flowchart used for explaining the growth of sensitivity and instinct.
  • FIG. 30 is a conceptual view showing an instinct parameter file for each “growth stage”.
  • FIG. 32 is a perspective view showing an embodiment of a pet robot according to the fifth embodiment.
  • FIG. 33 is a block diagram showing a circuit composition of a pet robot.
  • FIG. 34 is a schematic diagram showing data processing in a controller.
  • FIG. 35 is a schematic diagram showing data processing by an emotion and instinct model section.
  • FIG. 36 is a schematic diagram showing data processing by the emotion and instinct model section.
  • FIG. 37 is a schematic diagram showing data processing by the emotion and instinct model section.
  • FIG. 38 is a block diagram of component members for changing parameters of an emotion model in the above described pet robot.
  • FIG. 39 is a characteristic diagram showing an emotion expression ratio of a mate robot.
  • FIG. 40 is a state transition diagram of finite automaton in a behavior determination mechanism section.
  • FIG. 42 is a block diagram showing the component members for changing the parameters of the emotion model in the pet robot described in the fifth embodiment and is descriptive of another embodiment for changing the parameters of the emotion model.
  • FIG. 44 is a perspective view showing another embodiment of the pet robot described in the fifth embodiment.
  • reference numeral 1 denotes a pet robot according to the first embodiment as a whole, which is configured such that leg units 3 A to 3 D are coupled to a trunk unit 2 in front and in rear, and on both sides thereof, and a head unit 4 and a tail unit 5 are coupled to the front end and rear end of the trunk unit 2 respectively.
  • the trunk unit 2 contains a control unit 16 in which a CPU (Central Processing Unit) 10 , a DRAM (Dynamic Random Access Memory) 11 , a flash ROM (Read Only Memory) 12 , a PC (Personal Computer) card interface circuit 13 and a signal processing circuit 14 are connected to each other with an internal bus 15 , and a battery 17 as a power source for the pet robot 1 .
  • the trunk unit 2 contains an angular velocity sensor 18 , an acceleration sensor 19 or the like for detecting the direction and acceleration of the movement of the pet robot 1 .
  • in each coupled portion between each leg unit 3 A to 3 D and the trunk unit 2, there are disposed actuators 25 1 to 25 n and potentiometers 26 1 to 26 n having several degrees of freedom.
  • The various sensors such as the angular velocity sensor 18, the acceleration sensor 19, the touch sensor 21, the distance sensor 22, the microphone 23 and the potentiometers 26 1 to 26 n, as well as the speaker 24, the LEDs and the actuators 25 1 to 25 n, are connected to the signal processing circuit 14 in the control unit 16 via corresponding hubs 27 1 to 27 n, and the CCD camera 20 and the battery 17 are directly connected to the signal processing circuit 14.
  • the signal processing circuit 14 successively captures sensor data, image data and audio data to be supplied from each of the above described sensors, and successively stores these data in predetermined positions of the DRAM 11 via the internal bus 15 .
  • the signal processing circuit 14 successively captures battery residual amount data indicating the battery residual amount to be supplied from the battery 17 , together with those data to store them in a predetermined position of the DRAM 11 .
  • Each sensor data, image data, audio data and battery residual amount data, which have been stored in the DRAM 11 , will be utilized when the CPU 10 controls the operation of this pet robot 1 later.
  • the CPU 10 reads out, when the power supply for the pet robot 1 is initially turned on, a control program stored in a memory card 28 loaded in the PC card slot (not shown) of the trunk unit 2 or in the flash ROM 12, via the PC card interface circuit 13 or directly, to store it in the DRAM 11.
  • the CPU 10 judges self conditions and surrounding conditions, the presence or absence of any instruction and approaches from the user, and the like.
  • the CPU 10 determines a next action on the basis of this judgment result and the control program stored in the DRAM 11 , and drives necessary actuators 25 1 to 25 n on the basis of the determination result concerned to thereby move the head unit 4 left, right, up or down, wag a tail 5 A of the tail unit 5 , and drive each leg unit 3 A to 3 D for walking among others.
  • the CPU 10 produces audio data as required, and gives it to the speaker 24 through the signal processing circuit 14 as an audio signal to thereby output voice based on the audio signal concerned outward, or turns the LEDs on or off or blinks them.
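
The sense-judge-act cycle that the CPU 10 runs, as described in the last few items, can be summarized by the hedged sketch below. All function names and data are placeholders invented for illustration; they are not the actual firmware interfaces.

```python
# Minimal sketch (placeholder names) of the control cycle described above:
# read buffered sensor data, judge the situation, pick an action, drive outputs.
import random
import time

def read_dram():
    """Stand-in for the sensor/image/audio/battery data the signal processor buffers in DRAM."""
    return {"touch": random.random() < 0.1, "battery": 80}

def judge(data):
    """Coarse judgement of self/surrounding conditions and user approaches."""
    return "was petted" if data["touch"] else "idle"

def decide_next_action(situation):
    """Stand-in for the behavioral-model decision described in later sections."""
    return "wag tail" if situation == "was petted" else "walk"

def control_cycle(steps=3, period_s=0.05):
    for _ in range(steps):
        data = read_dram()
        action = decide_next_action(judge(data))
        print("driving actuators for:", action)   # on the robot: servo commands, sound and LEDs
        time.sleep(period_s)

if __name__ == "__main__":
    control_cycle()
```
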
  • FIG. 3 shows a software configuration of the control program in the pet robot 1 .
  • a device driver layer 30 is located at the lowest layer of this control program, and is configured by a device driver set 31 consisting of a plurality of device drivers.
  • each device driver is an object allowed to directly access hardware to be used in an ordinary computer such as the CCD camera 20 ( FIG. 2 ) and a timer, and receives an interruption from corresponding hardware for processing.
  • a robotic server object 32 is located in the upper layer of the device driver layer 30 , and is configured by: a virtual robot 33 consisting of a software group for providing an interface for accessing hardware such as, for example, various sensors and actuators 25 1 to 25 n described above; a power manager 34 consisting of a software group for managing switching or the like of the power supply; a device driver manager 35 consisting of a software group for managing other various device drivers; and a designed robot 36 consisting of a software group for managing the mechanism of the pet robot 1 .
  • a virtual robot 33 consisting of a software group for providing an interface for accessing hardware such as, for example, various sensors and actuators 25 1 to 25 n described above
  • a power manager 34 consisting of a software group for managing switching or the like of the power supply
  • a device driver manager 35 consisting of a software group for managing other various device drivers
  • a designed robot 36 consisting of a software group for managing the mechanism of the pet robot 1 .
  • the manager object 37 is configured by an object manager 38 and a service manager 39 .
  • the object manager 38 is a software group for managing start and end of each software group included in the robotic server object 32 , a middleware layer 40 and an application layer 41
  • the service manager 39 is a software group for managing connections of the objects on the basis of information on connections between the objects written in a connection file stored in the memory card 28 ( FIG. 2 ).
  • the middleware layer 40 is located in the upper layer of the robotic server object 32 , and is configured by a software group for providing this pet robot 1 with basic functions such as image processing and audio processing.
  • the application layer 41 is located in the upper layer of the middleware layer 40 and is configured by a software group for determining the actions of the pet robot 1 on the basis of results of processing obtained by processing by each software group constituting the middleware layer 40 .
  • FIG. 4 and FIG. 5 show concrete software configuration of the middleware layer 40 and the application layer 41 respectively.
  • the middleware layer 40 is, as is obvious also from FIG. 4 , configured by: a recognition system 57 having signal processing modules 50 to 55 for musical scales recognition, for distance detection, for posture detection, for the touch sensor, for movement detection and for color recognition, an input semantics converter module 56 and the like; and an output system 65 having an output semantics converter module 57 , signal processing modules 58 to 64 for posture management, for tracking, for operation playback, for walking, for falling-down and standing-up, for LED lighting and for sound reproducing, and the like.
  • each signal processing module 50 to 55 in the recognition system 57 captures corresponding data from among sensor data, image data and audio data to be read out from the DRAM 11 ( FIG. 2 ) by a virtual robot 33 in the robotic server object 32 , and carries out predetermined processing on the basis of the data to supply the processing results to the input semantics converter module 56 .
  • the input semantics converter module 56 recognizes the self conditions and the surroundings, such as “detecting a ball,” “fell down,” “has been petted,” “has been patted,” “hearing musical scales of do-re-mi-fa,” “detecting a moving object,” or “detecting an obstacle,” and any instruction and approach from the user, and outputs the recognition results to the application layer 41 ( FIG. 2 ).
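
The role of the input semantics converter module 56 can be pictured as a small mapping layer, sketched below with invented field names and thresholds: numeric results from the signal processing modules come in, and symbolic recognition strings such as those quoted above go out to the application layer.

```python
# Minimal sketch (assumed thresholds) of turning signal-processing results into
# the symbolic recognition strings passed up to the application layer.

def convert_to_semantics(results: dict) -> list[str]:
    """Map low-level detection results to symbolic recognition results."""
    recognitions = []
    if results.get("ball_size", 0) > 0:
        recognitions.append("detecting a ball")
    if results.get("touch_pressure", 0) > 0.7:
        recognitions.append("has been patted")
    elif results.get("touch_pressure", 0) > 0.0:
        recognitions.append("has been petted")
    if results.get("obstacle_distance", 1e9) < 100:
        recognitions.append("detecting an obstacle")
    return recognitions

if __name__ == "__main__":
    print(convert_to_semantics({"ball_size": 250, "touch_pressure": 0.2}))
```
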
  • the application layer 41 is, as shown in FIG. 5 , configured by five modules: a behavioral model library 70 , an action switching module 71 , a learning module 72 , an emotion model 73 and an instinct model 74 .
  • the behavioral model library 70 is, as shown in FIG. 6 , provided with respectively-independent behavioral models 70 1 to 70 n , by bringing each of them into correspondence with pre-selected several conditional items respectively, such as “when the battery residual amount has got low,” “when standing up after falling down,” “when dodging an obstacle,” “when expressing an emotion,” and “when detecting a ball.”
  • these behavioral models 70 1 to 70 n determine a next action while referring to a parameter value of corresponding emotion held in the emotion model 73 , and a parameter value of a corresponding desire held in the instinct model 74 as described later, as required, to output the determination result to the action switching module 71 .
  • each behavioral model 70 1 to 70 n uses, as a technique for determining the next action, an algorithm referred to as a "probability automaton," to probabilistically determine to which of the nodes NODE 0 to NODE n a transition is made from one node (state) NODE 0 to NODE n as shown in FIG. 7, on the basis of the transition probabilities P 1 to P n which have been set respectively for the arcs ARC 1 to ARC a1 connecting the nodes NODE 0 to NODE n.
  • each behavioral model 70 1 to 70 n has such a state transition table 80 for each of nodes NODE 0 to NODE n as shown in FIG. 8 by bringing each of them into correspondence with each of nodes NODE 0 to NODE n which form own behavioral models 70 1 to 70 n respectively.
  • for a node NODE 100 represented on the state transition table 80 of FIG. 8, when the recognition result "detecting a ball" has been given, the fact that the "SIZE" of the ball given together with the recognition result is within a range of "from 0 to 1000," or, when the recognition result "detecting an obstacle" has been given, the fact that the "DISTANCE" to the obstacle given together with the recognition result is within a range of "from 0 to 100," is a condition required to make a transition to another node.
  • also, if the parameter value of any of "JOY," "SURPRISE" and "SADNESS," which are held by the emotion model 73, is within a range of "from 50 to 100," transition can be made to another node.
  • at the node NODE 100 represented in the state transition table 80 of FIG. 8, for example, when the recognition result that "detecting a ball" and that the "SIZE" of the ball is within a range of "from 0 to 1000" has been given, it is possible to transit to "node NODE 120" at a probability of "30 [%]," and at that time the action of "ACTION 1" is outputted.
  • Each behavioral model 70 1 to 70 n is configured such that a great number of the nodes NODE 0 to NODE n described by such state transition tables 80 are connected, and when a recognition result is given from the input semantics converter module 56 or the like, the next action is determined probabilistically through the use of the state transition table 80 for the corresponding node NODE 0 to NODE n, and the determination result is outputted to the action switching module 71.
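
To make the probability-automaton idea concrete, the sketch below encodes one block of a state transition table in the spirit of FIG. 8 and picks the next node at random according to the listed probabilities. Only the NODE 100 to NODE 120 transition at 30% with "ACTION 1" comes from the text above; the remaining rows, names and data layout are assumptions added for illustration.

```python
# Minimal sketch (assumed data layout) of a probability automaton driven by a
# state transition table: conditions gate the matching rows, and the rows'
# probabilities decide the destination node and the action to output.
import random

# One entry per transition: condition name, allowed data range, destination node,
# probability in percent, and the action output on that transition.
TRANSITIONS = {
    "NODE_100": [
        {"event": "BALL",     "range": (0, 1000), "dest": "NODE_120", "prob": 30,  "action": "ACTION 1"},
        {"event": "BALL",     "range": (0, 1000), "dest": "NODE_100", "prob": 70,  "action": None},
        {"event": "OBSTACLE", "range": (0, 100),  "dest": "NODE_101", "prob": 100, "action": "AVOID"},
    ],
}

def step(node: str, event: str, value: float):
    """Return (next_node, action) chosen probabilistically among matching rows."""
    rows = [r for r in TRANSITIONS[node]
            if r["event"] == event and r["range"][0] <= value <= r["range"][1]]
    if not rows:
        return node, None                      # condition not met: stay put
    pick = random.uniform(0, sum(r["prob"] for r in rows))
    for r in rows:
        pick -= r["prob"]
        if pick <= 0:
            return r["dest"], r["action"]
    return rows[-1]["dest"], rows[-1]["action"]

if __name__ == "__main__":
    print(step("NODE_100", "BALL", 250))   # e.g. ('NODE_120', 'ACTION 1') with 30% probability
```
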
  • the action switching module 71 selects action outputted from a predetermined higher-priority behavioral model 70 1 to 70 n , and transmits a command (hereinafter, referred to as “action command”) to the effect that the action concerned should be taken to the output semantics converter 57 in the middleware layer 40 .
  • a behavioral model 70 1 to 70 n shown on the lower side in FIG. 6 is set higher in priority level.
  • the action switching module 71 notifies the learning module 72 , the emotion model 73 and the instinct model 74 to the effect that the action has been completed on the basis of action-completed information to be given by the output semantics converter 57 after the action is completed.
  • the learning module 72 inputs the recognition result of an instruction, received as approach from the user, such as “was patted” and “was petted.”
  • the learning module 72 then changes the corresponding transition probability of the corresponding behavioral model 70 1 to 70 n in the behavioral model library 70 in such a manner as to lower the revelation probability of the action when the robot "was patted (scolded)" and to raise the revelation probability of the action when it "was petted (praised)."
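
A hedged sketch of that learning rule follows. The step size and the renormalization back to a 100% column are assumptions for illustration; the text only states that scolding lowers and praising raises the relevant transition probability.

```python
# Minimal sketch (assumed step size) of feedback learning on one
# transition-probability column: praise raises the used transition, scolding
# lowers it, and the column is rescaled so it still sums to 100%.

def learn(column: dict, used_dest: str, praised: bool, step: float = 5.0) -> dict:
    """Adjust one transition-probability column (dest -> percent) after user feedback."""
    adjusted = dict(column)
    delta = step if praised else -step
    adjusted[used_dest] = max(0.0, adjusted[used_dest] + delta)
    total = sum(adjusted.values())
    return {dest: 100.0 * p / total for dest, p in adjusted.items()}  # keep the 100% sum

if __name__ == "__main__":
    column = {"NODE_120": 30.0, "NODE_100": 70.0}
    print(learn(column, "NODE_120", praised=True))    # the praised branch becomes likelier
    print(learn(column, "NODE_120", praised=False))   # being scolded makes it rarer
```
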
  • the emotion model 73 holds parameters for expressing intensity of the emotion for each emotion.
  • the emotion model 73 is adapted to successively update the parameter value for each of these emotions on the basis of specific recognition results of “has been patted,” “has been stroked” or the like to be given by the input semantics converter module 56 respectively, elapsed time, a notice from the action switching module 71 , or the like.
  • the emotion model 73 then replaces the current parameter value E[t] of the emotion with this arithmetic result to thereby update the parameter value of the emotion.
  • it is determined in advance which emotion's parameter value should be updated in response to each recognition result and each notice from the action switching module 71; when a recognition result of, for example, "has been patted" is given, the parameter value for the emotion of "ANGER" rises, while when a recognition result of "has been stroked" is given, the parameter value for the emotion of "JOY" rises.
  • the instinct model 74 holds, concerning four desires of “EXERCISE,” “AFFECTION,” “APPETITE” and “CURIOSITY,” which are independent of one another, a parameter for representing intensity of the desire for each of these desires.
  • the instinct model 74 is adapted to successively update these parameter values for the desires on the basis of recognition results to be given from the input semantics converter module 56 , elapsed time, notices from the action switching module 71 or the like respectively.
  • the parameter values for each emotion and each desire are regulated so as to fluctuate within a range of 0 to 100 respectively, and values for the coefficients k e and k i are also individually set for each emotion and for each desire.
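
Under the assumption of a simple additive form (the exact arithmetic expression is not reproduced in this text), the update of an emotion or desire parameter can be written as in the following sketch, with the coefficient playing the role of k e or k i and the result clamped to the regulated 0 to 100 range mentioned above.

```python
# Minimal sketch (assumed additive form) of the emotion/instinct parameter update:
# E[t+1] = clamp(E[t] + k * delta), where delta aggregates recognition results,
# elapsed time and notices, and k is the per-emotion (k_e) or per-desire (k_i) coefficient.

def update_parameter(current: float, delta: float, k: float) -> float:
    """Apply one update step and keep the value inside the regulated 0-100 range."""
    return max(0.0, min(100.0, current + k * delta))

if __name__ == "__main__":
    joy = update_parameter(40.0, delta=+15.0, k=0.8)    # e.g. "has been stroked" raises JOY
    anger = update_parameter(20.0, delta=+25.0, k=1.2)  # e.g. "has been patted" raises ANGER
    print(joy, anger)
```
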
  • the output semantics converter module 57 of the middleware layer 40 gives, as shown in FIG. 4 , such an abstract action command as “ADVANCE,” “JOY,” “YELP” or “TRACKING (Chase a ball)” to be given by the action switching module 71 of the application layer 41 as described above to the corresponding signal processing modules 58 to 64 of the output system 65 .
  • when an action command is given, these signal processing modules 58 to 64 generate a servo command value to be given to the corresponding actuator 25 1 to 25 n (FIG. 2) in order to take the action, audio data of sound to be outputted from the speaker 24 (FIG. 2) and/or driving data to be given to the LEDs serving as "eyes," on the basis of the action command, and successively transmit these data to the corresponding actuators 25 1 to 25 n, the speaker 24 or the LEDs through the virtual robot 33 of the robotic server object 32 and the signal processing circuit 14 (FIG. 2).
  • this pet robot 1 is adapted to be able to autonomously act in response to conditions of the self and surroundings and any instruction and approach from the user in accordance with the control program.
  • This pet robot 1 has the growth function which continuously changes the action as if it “grew” in response to approach or the like from the user.
  • this pet robot 1 is provided with five “growth stages” of “tweety,” “baby period,” “child period,” “young period” and “adult period” as growth processes.
  • the behavioral model library 70 ( FIG. 5 ) in the application layer 41 is provided with behavioral models 70 k(1) to 70 k(5) which are brought into correspondence with the “tweety,” “baby period,” “child period,” “young period” and “adult period” respectively as a behavioral model 70 k as shown in FIG.
  • among the conditional items such as the above described "when the battery residual amount is getting low," a conditional item relating to "growth," such as "Operation," "Action" or the like, is hereinafter referred to as a growth-related conditional item.
  • the behavioral model library 70 is adapted to determine the next action through the use of the behavioral models 70 k(1) of the "tweety period" in the initial stage concerning these growth-related conditional items.
  • each behavioral model 70 k(1) of the “tweety period” has a small number of nodes NODE 1 to NODE n ( FIG. 7 ), and the contents of actions to be outputted from these behavioral models 70 k(1) are also actions or operations corresponding to the “tweety period” like “walking in pattern 1 (walking pattern for “tweety period”)” or “making sounds in pattern 1 (bowwow pattern for “tweety period”).”
  • in accordance with each behavioral model 70 k(1) in the "tweety period," this pet robot 1 acts, in the initial stage, so that its "operation" becomes "simple" movement such as simply "walking," "standing" and "lying down," and so that its "action" becomes "monotonous" by repeatedly performing the same action.
  • the learning module 72 ( FIG. 5 ) in the application layer 41 holds parameters (hereinafter, referred to as “growth parameters”) representing degrees of “growth” therein, and is adapted to successively update the value of growth parameter in accordance with the number of times, elapsed time or the like of approaches (instructions) such as “was petted” or “was patted” from the user on the basis of the recognition result, elapsed time information or the like to be given from the input semantics converter module 56 .
  • the learning module 72 evaluates the value of this growth parameter every time the power to the pet robot 1 is turned on, and when the value exceeds a threshold which has been set in advance in correspondence with the "baby period," notifies the behavioral model library 70 of this. When this notice is given, the behavioral model library 70 changes, concerning each growth-related conditional item described above, the behavioral models to be used respectively to the behavioral models 70 k(2) of the "baby period."
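
The growth bookkeeping just described can be pictured as below. The stage thresholds and the per-interaction increment are placeholders: interactions and elapsed time accumulate into a growth parameter, and crossing a preset threshold switches the growth-related behavioral models to the next stage.

```python
# Minimal sketch (assumed thresholds) of growth-stage promotion: approaches from
# the user and elapsed time accumulate into a growth parameter, and crossing a
# threshold switches the behavioral models used for growth-related items.

STAGE_THRESHOLDS = [("tweety", 0), ("baby", 50), ("child", 150), ("young", 400), ("adult", 900)]

def current_stage(growth_parameter: float) -> str:
    """Return the highest stage whose threshold the growth parameter has reached."""
    stage = STAGE_THRESHOLDS[0][0]
    for name, threshold in STAGE_THRESHOLDS:
        if growth_parameter >= threshold:
            stage = name
    return stage

if __name__ == "__main__":
    growth = 0.0
    for interaction in ["was petted"] * 60:    # each approach nudges the parameter upward
        growth += 1.0
    print(current_stage(growth))               # -> "baby": switch to behavioral models 70k(2)
```
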
  • thereafter, this pet robot 1 acts in accordance with these behavioral models 70 k(2) so that its "operation" becomes "slightly higher-level and more complicated" movement, for example by increasing the number of actions, and so that its "action" becomes action "at least with an objective."
  • thereafter, every time the value of the growth parameter exceeds a threshold set in advance in correspondence with the "child period," the "young period" or the "adult period," the learning module 72 notifies the behavioral model library 70 of this. Every time this notice is given, the behavioral model library 70 successively changes, concerning each growth-related conditional item described above, the behavioral models to be used respectively to the behavioral models 70 k(3) to 70 k(5) of the "child period," "young period" and "adult period."
  • each behavioral model 70 k(3) to 70 k(5) in the “child period,” “young period” and “adult period” has a greater number of nodes NODE 0 to NODE n as the “growth stage” rises respectively, and the contents of actions to be outputted from these behavioral models 70 k(3) to 70 k(5) also become higher in degree of difficulty and level of complicatedness of the action as the “growth stage” rises.
  • this pet robot 1 successively stepwise changes the “operation” from “simple” toward “higher level and more complicated,” and the “action” from “monotonous” toward “action with intention” as the “growth stage” rises (more specifically, changes from “tweety period” to “baby period,” from “baby period” to “child period,” from “child period” to “young period” and from “young period” to “adult period”).
  • this pet robot 1 is adapted to cause its action and operation to “grow” in five stages: “tweety,” “baby period,” “child period,” “young period” and “adult period” in accordance with instructions to be given by the user and elapsed time.
  • the growth model of the pet robot 1 is a model which branches off in and after the “child period” as shown in FIG. 10 .
  • the behavioral model library 70 in the application layer 41 ( FIG. 5 ) is provided with a plurality of behavioral models respectively as behavioral models 70 k(3) to 70 k(5) for the “child period,” “young period” and “adult period” concerning each growth-related conditional item described above.
  • for the behavioral model 70 k(3) of, for example, the "child period" in each growth-related conditional item, there are prepared a behavioral model (CHILD 1) for causing the pet robot to perform action of a "fidgety" character having careless and quick movement, and a behavioral model (CHILD 2) for causing it to perform action of a "calm" character having smoother and slower movement than the CHILD 1.
  • for the behavioral model 70 k(4) of the "young period," there are prepared a behavioral model (YOUNG 1) for causing it to perform action of a "rough" character having more careless and quicker movement than the "fidgety" character of the "child period," a behavioral model (YOUNG 2) for causing it to perform action and operation of an "ordinary" character having slower and smoother movement than the YOUNG 1, and a behavioral model (YOUNG 3) for causing it to perform action of a "calm" character having slower operation and a smaller amount of action than the YOUNG 2.
  • the learning module 72 ( FIG. 5 ) in the application layer 41 designates the behavioral model (CHILD 1 , CHILD 2 , YOUNG 1 to YOUNG 3 and ADULT 1 to ADULT 4 ) of which “character” should be used as a behavioral model in the next “growth stage” in each growth-related conditional item on the basis of the number of times or the like of “was petted” and “was stroked,” in and after the “child period,” in the “growth stage” thereof.
  • the behavioral model library 70 changes, on the basis of this designation, the behavioral models 70 k(3) to 70 k(5) to be used in and after the “child period” concerning each growth-related conditional item to behavioral models (CHILD 1 , CHILD 2 , YOUNG 1 to YOUNG 3 and ADULT 1 to ADULT 4 ) of the designated “character” respectively.
  • this pet robot 1 is adapted to change also its "character" along with the "growth" in response to approaches or the like from the user, as if a genuine animal formed its character in accordance with how the owner raises it, or the like.
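
The character branching in and after the "child period" can be illustrated with the toy selector below. The decision rule is an assumption; the text only says the choice depends on the number of times of "was petted," "was stroked" and the like during the current "growth stage."

```python
# Minimal sketch (assumed rule) of choosing which "character" behavioral model
# (e.g. CHILD 1 / CHILD 2, YOUNG 1-3) to use in the next growth stage, based on
# counts of how the robot was treated during the current stage.

def pick_child_model(times_petted: int, times_patted: int) -> str:
    """More scolding (patting) than praising steers toward the 'fidgety' model."""
    return "CHILD 1 (fidgety)" if times_patted > times_petted else "CHILD 2 (calm)"

def pick_young_model(times_petted: int, times_patted: int) -> str:
    """Map the praise/scold balance to rough, ordinary or calm behavior."""
    balance = times_petted - times_patted
    if balance < -10:
        return "YOUNG 1 (rough)"
    if balance > 10:
        return "YOUNG 3 (calm)"
    return "YOUNG 2 (ordinary)"

if __name__ == "__main__":
    print(pick_child_model(times_petted=25, times_patted=5))   # gentle owner -> calm child
    print(pick_young_model(times_petted=8, times_patted=30))   # rough treatment -> rough youth
```
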
  • the behavioral model 70 k of each growth-related conditional item has vast state space in which each of all the action patterns which the pet robot 1 is capable of revealing has been stored.
  • these behavioral models 70 k use only a limited portion including the core concerned in the “tweety,” and thereafter, every time it “grows,” permit transition to a state space portion to be newly increased (state space portion in which actions and a series of action patterns enabling to be newly revealed will be generated), and sever a state space portion which has been no longer used (state space portion in which actions and a series of action patterns which will not be caused to be revealed will be generated) to thereby generate behavioral models 70 k(1) to 70 k(5) in each “growth stage.”
  • this pet robot 1 uses a method of changing the transition probability to the state space in response to the “growth.”
  • assuming, for example, that the transition condition from a node NODE A to a node NODE B is "detecting a ball," and that a series of node groups 90 starting from the node NODE B is used to perform a series of action patterns to "approach the ball and kick it," then when the ball has been found at the node NODE A, an action pattern PA 1 to "chase the ball and kick it" can be revealed at the transition probability P 1; if the transition probability P 1 is "0," however, such an action pattern PA 1 will never be revealed.
  • this transition probability P 1 is set to “0” in advance in the initial stage and this transition probability P 1 is caused to be changed to a predetermined numerical value higher than “0” when the “growth stage” concerned is reached.
  • conversely, when this action pattern PA 1 is to be forgotten at a certain "growth stage," the transition probability from the node NODE A to the node NODE B is changed to "0" when that "growth stage" is reached.
  • each behavioral model of the above described growth-related conditional item is provided with such files (hereinafter, referred to as difference files) 91 A to 91 D as shown in FIG. 12 by bringing it into correspondence with each “growth stage” of the “baby period,” “child period,” “young period” and “adult period” respectively.
  • in these difference files 91 A to 91 D, there are contained the node name (number) of a node (corresponding to the node NODE A in FIG. 11) at which the transition probability should be changed as described above, the place in the state transition table 80 (FIG. 8) of that node at which the transition probability should be changed, and the transition probability at the place concerned after the change, in order to reveal the new action as described above on rising to the "growth stage" concerned.
  • the behavioral model 70 k of each growth-related conditional item generates action in accordance with the behavioral model 70 k(1) for the "tweety period" in the initial stage, and when a notice to the effect that the robot has "grown" is thereafter given from the learning module 72 (FIG. 5) as described above, the transition probability at each of the respectively designated places is changed, on the basis of the difference file 91 A to 91 D for the corresponding "growth stage," to the numerical value designated for each node described in the difference file 91 A to 91 D concerned.
  • for example, when the pet robot has grown to the "baby period," in the behavioral model 70 k for each growth-related conditional item, the transition probability in the "first column" and the "first row" of the area (in FIG. 8, the portion which is below the "Output Action" row and on the right of the "Data Range" column) in the state transition table 80 in which the transition probabilities for the node NODE 100 are described will be changed to "20 [%]," the transition probability in the "first column" and the "n-th row" in the state transition table concerned will be changed to "30 [%]," and so forth.
  • among the transition probabilities described in these difference files 91 A to 91 D, there are a transition probability whose value until then has been "0" (that is, a transition to a node which serves as the starting point of a series of action patterns has been prohibited) and a transition probability whose value after the change becomes "0" (that is, a transition to a node which serves as the starting point of a series of action patterns is prohibited).
  • when a transition probability is changed from "0" to a predetermined numerical value in this manner, or when a transition probability becomes "0" after the change, the corresponding series of action patterns becomes revealable in the new "growth stage," or is no longer revealed, respectively.
  • in this connection, the values of the transition probabilities in each difference file 91 A to 91 D are selected in such a manner that the sum of the transition probabilities included in the corresponding column of the state transition table 80 after the change amounts to 100 [%].
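
A minimal data-structure sketch of such a difference file and its application follows. The field names and the extra row are invented (the actual file format is not given in this text); only the "20 [%]" and "30 [%]" rewrites for node NODE 100 come from the example above, and the entries are chosen so that the affected column still sums to 100%.

```python
# Minimal sketch (invented field names) of applying a "difference file" when the
# pet robot rises to a new growth stage: each entry rewrites one transition
# probability in one node's state transition table.

# column -> row -> probability in percent, for one node's "transition to other nodes" area.
state_transition_area = {
    "NODE_100": {("col 1", "row 1"): 0.0, ("col 1", "row n"): 100.0},
}

# Difference file for the "baby period": node, position, new probability.
DIFF_BABY = [
    {"node": "NODE_100", "pos": ("col 1", "row 1"), "prob": 20.0},
    {"node": "NODE_100", "pos": ("col 1", "row n"), "prob": 30.0},
    {"node": "NODE_100", "pos": ("col 1", "row 2"), "prob": 50.0},   # assumed extra row
]

def apply_difference_file(area: dict, diff: list) -> None:
    """Overwrite the designated transition probabilities for the new growth stage."""
    for entry in diff:
        area[entry["node"]][entry["pos"]] = entry["prob"]

if __name__ == "__main__":
    apply_difference_file(state_transition_area, DIFF_BABY)
    column = [p for (col, _), p in state_transition_area["NODE_100"].items() if col == "col 1"]
    assert abs(sum(column) - 100.0) < 1e-9       # values in the file are chosen to keep 100%
    print(state_transition_area["NODE_100"])
```
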
  • this pet robot 1 uses only a limited portion including the core concerned in the “tweety,” and thereafter, every time it “grows,” severs the state space portion which has been no longer used except for the core, and permits a transition to a state space portion which should be newly increased to thereby generate behavioral models 70 k(1) to 70 k(n) in each “growth stage,” and to act in accordance with the behavioral models 70 k(1) to 70 k(n) concerned thus generated.
  • this pet robot 1 is capable of reducing discontinuity in the output action before and after the “growth” to thereby express the “growth” more naturally because the state space of the behavioral models 70 k(1) to 70 k(n) in each “growth stage” continuously changes.
  • the state space portion for generating basic actions is shared in all “growth stages,” the learning result of the basic action can be successively carried forward to the next “growth stage.”
  • this pet robot 1 since the state space portion for generating basic action is shared in all “growth stages” as described above, it is easy to prepare behavioral models 70 k(1) to 70 k(n) for each “growth stage,” and it is also possible to curtail the amount of data for the entire behavioral models as compared with a case where individual behavioral models are prepared for each “growth stage” as in the conventional case.
  • this pet robot 1 severs the unnecessary state space for a series of action patterns in accordance with the “growth” as described above, and permits a transition to necessary state space for a series of action patterns, to thereby generate behavioral models 70 k(1) to 70 k(n) in each “growth stage.” Therefore, it is possible to divide each series of action patterns into parts, and to further facilitate an operation for generating the behavioral model 70 k of each growth-related conditional item by that much.
  • this pet robot 1 uses only a limited portion including the core concerned in the “tweety,” and thereafter, every time it “grows,” severs the state space portion which has been no longer used except for the core, and permits a transition to a state space portion which should be newly increased to thereby generate the behavioral models 70 k(1) to 70 k(n) in each “growth stage,” thereby it is possible to reduce discontinuity in the output action before and after the “growth” by continuously changing the state space for the behavioral models 70 k(1) to 70 k(n) in each “growth stage.”
  • the “growth” can be expressed more naturally, and it is possible to realize a pet robot capable of improving the entertainment characteristics.
  • the behavioral model 70 k for each growth-related conditional item adds the data for the action pattern file 102 B for the “baby period” to the basic behavioral model 101 in place of the data for the action pattern file 102 A for the “tweety,” and converts each virtual node NODE K1 to NODE Kn within the basic behavioral model 101 to a real node on the basis of the correspondence table 103 B for the “baby period” stored in the difference file 103 to thereby generate the behavioral model 70 k(2) for the “baby period,” for generating action on the basis of the behavioral model 70 k(2) concerned.
  • this pet robot 100 is adapted to change its action in response to the “growth” by successively changing the action patterns PA 1 to PA n to be brought into correspondence with each virtual node NODE K1 to NODE Kn within the basic behavioral model 101 respectively along with the “growth.”
  • therefore, it is possible to divide, into parts, each action pattern PA 1 to PA n to be brought into correspondence with each virtual node NODE K1 to NODE Kn within the basic behavioral model 101 respectively, and to further facilitate the operation of generating the behavioral model 70 K of each growth-related conditional item by that much.
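
The virtual-node mechanism of this second embodiment can be sketched as a lookup that is swapped at each growth stage. The names below are illustrative: the basic behavioral model always transits to virtual nodes, and a per-stage correspondence table decides which real action-pattern node each virtual node currently stands for.

```python
# Minimal sketch (illustrative names) of virtual nodes: the basic behavioral
# model refers only to virtual nodes, and a per-growth-stage correspondence
# table maps each virtual node onto a real starting node of an action pattern.

CORRESPONDENCE = {
    "tweety": {"VIRTUAL_K1": "PA1_start", "VIRTUAL_K2": "PA2_start"},
    "baby":   {"VIRTUAL_K1": "PA3_start", "VIRTUAL_K2": "PA4_start"},  # patterns swapped on growth
}

def resolve(virtual_node: str, stage: str) -> str:
    """Convert a virtual node of the basic behavioral model into a real node for this stage."""
    return CORRESPONDENCE[stage][virtual_node]

if __name__ == "__main__":
    print(resolve("VIRTUAL_K1", "tweety"))   # -> PA1_start
    print(resolve("VIRTUAL_K1", "baby"))     # after "growth", the same virtual node points elsewhere
```
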
  • reference numeral 110 generally denotes a pet robot according to a third embodiment.
  • Leg units 112 A to 112 D are respectively connected to the front right, front left, rear right, and rear left parts of a body unit 111 and a head unit 113 and a tail unit 114 are respectively connected to the front end part and the rear end part of the body unit 111 .
  • the body unit 111 contains a control unit 126, in which a CPU (Central Processing Unit) 120, a DRAM (Dynamic Random Access Memory) 121, a flash ROM (Read Only Memory) 122, a PC (Personal Computer) card interface circuit 123 and a signal processing circuit 124 are connected to each other with an internal bus 125, and a battery 127 serving as the power source of the pet robot 110. Further, an angular velocity sensor 128 and an acceleration sensor 129 for detecting the orientation or the acceleration of the movement of the pet robot 110 or the like are also accommodated in the body unit 111.
  • a CCD (Charge Coupled Device) camera 130 for picking up the image of an external condition
  • a touch sensor 131 for detecting applied pressure due to physical actions from a user such as “petting” or “patting”
  • a distance sensor 132 for measuring a distance to a front object
  • a microphone 133 for collecting external sounds
  • a speaker 134 for outputting sounds such as barks, and LEDs (Light Emitting Diodes) (not shown) corresponding to the "eyes" of the pet robot 110.
  • actuators 135 1 to 135 n corresponding to the number of degrees of freedom and potentiometers 136 1 to 136 n are respectively disposed in the joint parts of the leg units 112 A to 112 D, the connecting parts between the leg units 112 A to 112 D and the body unit 111, the connecting part between the head unit 113 and the body unit 111, the connecting part of the tail 114 A of the tail unit 114, and the like.
  • the various sensors such as the angular velocity sensor 128, the acceleration sensor 129, the touch sensor 131, the distance sensor 132, the microphone 133 and the potentiometers 136 1 to 136 n, as well as the speaker 134, the LEDs and the actuators 135 1 to 135 n, are connected to the signal processing circuit 124 of the control unit 126 with corresponding hubs 27 1 to 27 N.
  • the CCD camera 130 and the battery 127 are directly connected to the signal processing circuit 124 .
  • the signal processing circuit 124 sequentially fetches sensor data, image data and audio data supplied from the above described sensors and sequentially stores them in prescribed positions in the DRAM 121 through an internal bus 125 . Further, the signal processing circuit 124 sequentially fetches battery residual amount data indicating battery residual amount supplied from the battery 127 as well as them and stores it in a prescribed position in the DRAM 121 .
  • the sensor data, the image data, the audio data and the battery residual amount data stored in the DRAM 121 in such a way are used when the CPU 120 controls the operation of the pet robot 110 after that.
  • the CPU 120 reads out a control program stored in a memory card 138 inserted in the PC card slot (not shown) of the body unit 111 or in the flash ROM 122 directly or through the PC card interface circuit 123 and stores it in the DRAM 121 .
  • the CPU 120 judges its own and circumferential states and whether or not there exists any instruction or action from the user, on the basis of the sensor data, the image data, the audio data and the battery residual amount data which are sequentially stored in the DRAM 121 from the signal processing circuit 124 as mentioned above.
  • the CPU 120 generates the audio data as required and supplies the audio data to the speaker 134 as an audio signal through the signal processing circuit 124 to output a voice based on the audio signal to an external part, and turns on, turns off or flickers the above described LEDs.
  • the pet robot 110 is designed to autonomously conduct in accordance with its own and circumferential states and the instruction and action from the user.
  • The software configuration of the above described control program in the pet robot 110 is shown in FIG. 20.
  • a device driver layer 140 is located in the lowermost layer of the control program and composed of a device driver set 141 having a plurality of device drivers.
  • each device driver is an object which is allowed to directly access hardware employed in an ordinary computer, such as the CCD camera 130 (FIG. 19) or a timer, and executes a processing by receiving an interruption from the corresponding hardware.
  • a robotic server object 142 is located in a layer higher than the device driver layer 140 .
  • the robotic server object 142 comprises a virtual robot 143 composed of software groups for providing interfaces to have access to, for instance, the hardware such as the above described various types of sensors or the actuators 135 1 to 135 n , etc., a power manager 144 composed of software groups for managing the switching of the power source or the like, a device driver manager 145 composed of software groups for managing various kinds of other device drivers and a designed robot 146 composed of software groups for managing the mechanism of the pet robot 110 .
  • a manager object 147 comprises an object manager 148 and a service manager 149 .
  • the object manager 148 comprises software groups for managing the start and end of each of software groups included in the robotic server object 142 , a middleware layer 150 and an application layer 151 .
  • the service manager 149 comprises software groups for managing the connection of the objects on the basis of connection information between the objects written in a connection file stored in the memory card 138 ( FIG. 19 ).
  • the middleware layer 150 is located in a layer higher than the robotic server object 142 and is composed of software groups for providing the basic functions of the pet robot 110 such as image processing or audio processing.
  • the application layer 151 is located in a layer higher than the middleware layer 150 and is composed of software groups for determining the action of the pet robot 110 on the basis of the result processed by the software groups constituting the middleware layer 150 .
  • the specific software configurations of the middleware layer 150 and the application layer 151 are respectively shown in FIGS. 21 and 22 .
  • the middleware layer 150 comprises a recognition system 167 including signal processing modules 160 to 165 for recognizing sound scales, for detecting a distance, for detecting a posture, for a touch sensor, for detecting a movement and for recognizing a color and an input semantics converter module 166 and an output system 175 including an output semantics converter module 167 and signal processing modules 168 to 174 for managing a posture, for tracking, for reproducing a motion, for walking, for standing up after falling down, for lighting the LEDs and reproducing sounds and the like.
  • the respective signal processing modules 160 to 165 of the recognition system 167 fetch the corresponding data from among the sensor data, the image data and the audio data read from the DRAM 121 ( FIG. 19 ) by the virtual robot 143 of the robotic server object 142 , execute prescribed processing on the basis of the fetched data and supply the processed results to the input semantics converter module 166 .
  • the input semantics converter module 166 recognizes its own and circumferential states or the instructions and the actions from the user such as “detected a ball”, “detected a falling-down”, “was touched”, “was patted”, “heard the scales of do, mi, so”, “detected a moving object” or “detected an obstacle”, on the basis of the processed results supplied from the signal processing modules 160 to 165 and outputs the recognition results to the application layer 151 ( FIG. 19 ).
  • the application layer 151 comprises five modules including a behavioral model library 180 , an action switching module 181 , a learning module 182 , an emotion model 183 and an instinct model 184 .
  • independent behavioral models 180 1 to 180 n are provided so as to correspond to several previously selected condition items such as “when a battery residual amount is getting low”, “when standing up after falling down”, “when avoiding an obstacle”, “when expressing an emotion”, “when detecting a ball” or the like.
  • these behavioral models 180 1 to 180 n respectively determine subsequent actions by referring to the parameter values of corresponding emotional behaviors held in the emotion model 183 or the parameter values of corresponding desires held in the instinct model 184 if necessary as mentioned later and output the determination results to the action switching module 181 .
  • the respective behavioral models 180 1 to 180 n use, as a method for determining subsequent actions, an algorithm called a probability automaton which probabilistically determines to which of the nodes NODE 0 ′ to NODE n ′ a transition is made from one node (state) NODE 0 ′ to NODE n ′, on the basis of transition probabilities P 1 ′ to P n ′ respectively set to the arcs ARC 1 to ARC n1 connecting the nodes NODE 0 ′ to NODE n ′ together.
  • each behavioral model 180 1 to 180 n has a state transition table 190 as shown in FIG. 25 for each of these nodes NODE 0 ′ to NODE n ′, so as to correspond to each of the nodes NODE 0 ′ to NODE n ′ which respectively form their own behavioral models 180 1 to 180 n .
  • for instance, in the case where the recognition result of “detected a ball (BALL)” is obtained, the “size (SIZE)” of the ball which is given together with the recognition result needs to be within a range of “0 to 1000” as a condition to shift to another node; or, in the case where the recognition result of “detected an obstacle (OBSTACLE)” is obtained, the “distance (DISTANCE)” to the obstacle which is given together with the recognition result needs to be within a range of “0 to 100” as a condition to shift to another node.
  • the node can shift to another node when the parameter value of any one of “joy (JOY)”, “surprise (SURPRISE)” or “sadness (SADNESS)” of the parameter values of respective emotional behaviors and respective desires held in the emotion model 183 and the instinct model 184 to which the behavioral models 180 1 to 180 n periodically refer is within a range of “50 to 100”.
  • the names of nodes to which a shift can be made from the nodes NODE 0 ′ to NODE n ′ are enumerated in the row of “transition destination node” in the column of “transition probability to other nodes”. Further, the transition probability to each of the other nodes NODE 0 ′ to NODE n ′ to which the node can shift when all the conditions described in the rows of “input event name”, “data name” and “range of data” are satisfied is described in the corresponding position in the column of “transition probability to other nodes”. Actions to be outputted upon shift to the nodes NODE 0 ′ to NODE n ′ are described in the row of “output action” in the column of “transition probability to other nodes”. In this connection, the sum of the probabilities in each row of the column of “transition probability to other nodes” is 100 [%].
  • in the node NODE 100 ′ seen in the state transition table 190 shown in FIG. 25 , for instance, in the case where the recognition result of “detected a ball (BALL)” in which the “SIZE (size)” of the ball is within a range of “0 to 1000” is given, the node NODE 100 ′ can shift to a “node NODE 120 ′” with a probability of “30 [%]”, and the action of “ACTION 1” is outputted at that time.
  • the behavioral models 180 1 to 180 n are configured so that the nodes NODE 0 ′ to NODE n ′ respectively described in such a state transition table 190 are linked together.
  • in this way, the behavioral models 180 1 to 180 n stochastically determine next actions by using the state transition table 190 of the corresponding nodes NODE 0 ′ to NODE n ′ and output the determination results to the action switching module 181 .
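  • the stochastic selection from such a state transition table can be pictured with the minimal Python sketch below. The table layout and the names TRANSITION_TABLE and select_transition are illustrative assumptions, not structures of the patent itself; only the example figures (SIZE within 0 to 1000, 30 [%] to NODE 120 ′, ACTION 1) come from the description above, and the remaining rows are invented for illustration.

```python
import random

# Hypothetical, simplified state transition table for one node (cf. table 190).
# Each key: (input event name, data name, (lower, upper) data range).
# Each value: list of (destination node, transition probability [%], output action).
TRANSITION_TABLE = {
    ("BALL", "SIZE", (0, 1000)): [
        ("NODE_120", 30, "ACTION_1"),
        ("NODE_100", 70, None),          # assumed: remain on the current node
    ],
    ("OBSTACLE", "DISTANCE", (0, 100)): [
        ("NODE_101", 100, "MOVE_BACK"),  # assumed row for illustration
    ],
}

def select_transition(event, data_name, value):
    """Pick a destination node and an output action for one recognition result."""
    for (ev, name, (lo, hi)), rows in TRANSITION_TABLE.items():
        if ev == event and name == data_name and lo <= value <= hi:
            nodes, probs, actions = zip(*rows)   # probabilities sum to 100 [%]
            idx = random.choices(range(len(rows)), weights=probs, k=1)[0]
            return nodes[idx], actions[idx]
    return None, None   # condition not satisfied: no transition

if __name__ == "__main__":
    print(select_transition("BALL", "SIZE", 500))
```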
  • the action switching module 181 selects, from among the actions outputted from the behavioral models 180 1 to 180 n of the behavioral model library 180 , the action outputted from the behavioral model that is highest in predetermined priority, and transmits a command for executing the action (hereinafter called an action command) to the output semantics converter module 167 of the middleware layer 150 .
  • the behavioral models 180 1 to 180 n described in the lower side in FIG. 23 are set to be higher in priority.
  • the action switching module 181 informs the learning module 182 , the emotion model 183 and the instinct model 184 of the completion of the action on the basis of action completion information supplied from the output semantics converter 167 after the completion of the action.
  • the learning module 182 receives, from among the recognition results supplied from the input semantics converter module 166 , the recognition results of teaching given as actions from the user, such as “hit” or “patted”.
  • the learning module 182 then changes the corresponding transition probabilities in the behavioral models 180 1 to 180 n so as to lower the appearance probability of an action upon “hit (scolded)” and raise the appearance probability of an action upon “patted (praised)”, on the basis of the recognition result and the information from the action switching module 181 .
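  • as a rough illustration of this learning rule (not the patent’s actual procedure), the sketch below shifts probability mass away from the arc whose action was scolded, or toward it when the action was praised, and renormalizes so that the row still sums to 100 [%]; the function name and the step size are assumptions.

```python
def adjust_transition_probabilities(row, taken_index, praised, step=5.0):
    """row: transition probabilities [%] of one state transition table row.
    Lower the probability of the taken arc when scolded ("hit"),
    raise it when praised ("patted"), then renormalize to 100 [%]."""
    probs = list(row)
    probs[taken_index] = max(0.0, probs[taken_index] + (step if praised else -step))
    total = sum(probs)
    return [p * 100.0 / total for p in probs]

# Example: the action reached through arc 0 was followed by a "hit" (scolded).
print(adjust_transition_probabilities([30.0, 70.0], taken_index=0, praised=False))
```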
  • the emotion model 183 holds a parameter indicating the intensity of an emotional behavior for each emotional behavior of the total of six emotional behaviors including “joy”, “sadness”, “anger”, “surprise”, “disgust” and “fear”.
  • the emotion model 183 is adapted to sequentially update the parameter values of these emotional behaviors on the basis of the specific recognition results such as “hit” and “patted”, etc. respectively supplied from the input semantics converter module 166 , the lapse of time and information from the action switching module 181 or the like.
  • specifically, the emotion model 183 calculates the parameter value E[t+1]′ of an emotional behavior for a next cycle at prescribed intervals by using the following equation (4), where ΔE[t]′ is the quantity of fluctuation of the emotional behavior calculated by a prescribed operation expression on the basis of the recognition result from the input semantics converter module 166 , the degree (preset) to which the action of the pet robot 110 at that time acts on the emotional behavior, the parameter value of each desire held by the instinct model 184 , the degree of suppression and stimulation received from other emotional behaviors, the lapse of time and the like; E[t]′ is the present parameter value of the emotional behavior; and k e ′ is a coefficient expressing a rate (hereinafter called a sensitivity) at which the emotional behavior is changed depending on the recognition result, etc.
  • E[t+1]′ = E[t]′ + k e ′ × ΔE[t]′   (4)
  • then, the emotion model 183 replaces the present parameter value E[t]′ of the emotional behavior with the calculated result to update the parameter value of the emotional behavior. Which emotional behavior has its parameter value updated relative to each of the recognition results and the information from the action switching module 181 is previously determined: for instance, in the case where the recognition result of “hit” is supplied, the parameter value of the emotional behavior of “anger” is raised, and in the case where the recognition result of “patted” is supplied, the parameter value of the emotional behavior of “joy” is increased.
  • the instinct model 184 holds a parameter indicating the intensity of a desire for each desire of mutually independent four desires including an “exercise desire”, an “affection desire”, an “appetite desire” and a “curiosity desire”. Then, the instinct model 184 is adapted to sequentially update the parameter values of these desires respectively on the basis of the recognition results supplied from the input semantics converter module 166 , the lapse of time and the notification from the action switching module 181 or the like.
  • specifically, the instinct model 184 calculates the parameter value I[k+1]′ of a desire for a next cycle at prescribed intervals by using the following equation (5), where ΔI[k]′ is the quantity of fluctuation of the desire calculated by a prescribed operation expression on the basis of the action output of the pet robot 110 , the lapse of time, the recognition results and the like (for the “exercise desire”, the “affection desire” and the “curiosity desire”); I[k]′ is the present parameter value of the desire; and k i ′ is a coefficient indicating the sensitivity of the desire.
  • I[k+1]′ = I[k]′ + k i ′ × ΔI[k]′   (5)
  • then, the instinct model 184 replaces the present parameter value I[k]′ with the calculated result to update the parameter value of the desire. Which desire has its parameter value changed relative to the action output or the recognition results, etc. is previously determined; for instance, when a notification (information of an action) is sent from the action switching module 181 , the parameter value of the “exercise desire” decreases.
  • further, the instinct model 184 calculates the parameter value I[k]′ of the “appetite desire” in accordance with the following equation (6) for a prescribed cycle, assuming that the battery residual amount is B L ′ on the basis of the battery residual amount data supplied through the input semantics converter module 166 .
  • I[k]′ = 100 − B L ′   (6)
  • then, the instinct model 184 replaces the present parameter value I[k]′ of the “appetite desire” with the calculated result to update the parameter value of the “appetite desire”.
  • the parameter values of the emotional behaviors and desires are regulated so as to respectively vary within a range of 0 to 100. Further, the values of coefficients k e ′ and k i ′ are individually set for each emotional behavior and for each desire.
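  • the update rules of the equations (4) to (6) can be summarized in the short sketch below; the clamping helper and the example values are assumptions, and only the arithmetic follows the equations above.

```python
def clamp(value, lo=0.0, hi=100.0):
    """Parameter values are regulated to vary within the range 0 to 100."""
    return max(lo, min(hi, value))

def update_emotion(e_t, delta_e, k_e):
    """Equation (4): E[t+1]' = E[t]' + k_e' * dE[t]'."""
    return clamp(e_t + k_e * delta_e)

def update_desire(i_k, delta_i, k_i):
    """Equation (5): I[k+1]' = I[k]' + k_i' * dI[k]'."""
    return clamp(i_k + k_i * delta_i)

def appetite_from_battery(battery_residual):
    """Equation (6): I[k]' = 100 - B_L'."""
    return clamp(100.0 - battery_residual)

# Example: a "patted" recognition raises "joy"; a low battery raises the "appetite desire".
print(update_emotion(e_t=40.0, delta_e=15.0, k_e=0.5))     # -> 47.5
print(appetite_from_battery(battery_residual=30.0))        # -> 70.0
```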
  • the output semantics converter module 167 of the middleware layer 150 supplies abstract action commands such as “advancement”, “joy”, “cry” or “tracking (chase a ball)” supplied from the action switching module 181 of the application layer 151 as mentioned above to the corresponding signal processing modules 168 to 174 of the output system 175 , as shown in FIG. 21 .
  • these signal processing modules 168 to 174 generate servo command values to be supplied to the actuators 135 1 to 135 n ( FIG. 19 ) for carrying out the actions, the audio data of sound outputted from the speaker 134 ( FIG. 19 ) or driving data supplied to the LED of the “eye” on the basis of the action commands and sequentially transmit these data to the corresponding actuators 135 1 to 135 n , the speaker 134 or the LEDs through the virtual robot 143 of the robotic server object 142 and the signal processing circuit 124 ( FIG. 19 ).
  • the pet robot 110 is designed to carry out the autonomous actions in accordance with its own and circumferential states and the instructions and the actions from the user on the basis of a control program.
  • the pet robot 110 is also provided with a growth function for changing its actions in accordance with the actions from the user, as if a real animal were growing.
  • specifically, behavioral models 180 k(1) to 180 k(5) are respectively provided as the behavioral models 180 k corresponding to the “tweety period”, the “baby period”, the “child period”, the “young period” and the “adult period”, as shown in FIG. 26 .
  • these behavioral models 180 k are provided for the condition items related to the four items of a “walking state”, a “motion”, an “action” and a “sound (bark)” among the respective condition items such as the above described “when the battery residual amount gets low” (hereinafter called condition items related to growth). Then, concerning these condition items related to growth, the behavioral model library 180 determines a next action by employing the behavioral model 180 k(1) of the “tweety period” during an initial time.
  • each behavioral model 180 k(1) of the “tweety period” has a small number of nodes NODE 0 ′ to NODE n ′ ( FIG. 24 ).
  • the contents of the actions outputted from these behavioral models 180 k(1) are actions or contents of actions which correspond to the “tweety period”, such as “move forward in accordance with a pattern 1” (a walking pattern for the “tweety period”) or “cry in accordance with a pattern 1” (a crying pattern for the “tweety period”).
  • the pet robot 110 acts and operates in accordance with each behavioral model 180 k(1) of the “tweety period” during the initial time; for instance, it “walks totteringly” with small steps as for the “walking state”, makes “simple” movements such as simply “walking”, “standing” and “sleeping” as for the “motion”, makes a “monotonous” action by repeating the same actions as for the “action”, and generates a “small and short” cry as for the “sound”.
  • the learning module 182 ( FIG. 22 ) of the application layer 151 holds a parameter (called it a growth parameter, hereinafter) indicating the degree of “growth” therein and sequentially updates the value of the growth parameter in accordance with the number of times of actions (teaching) from the user such as “patted” or “hit” or the lapse of time on the basis of the recognition results or the lapse of time information, etc. supplied from the input semantics converter module 166 .
  • the learning module 182 evaluates the value of the growth parameter every time the power source of the pet robot 110 is turned on. When the above described value exceeds a preset threshold value so as to correspond to the “baby period”, the learning module 182 informs the behavioral model library 180 of it. When being given this information, the behavioral model library 180 changes the behavioral models used respectively for the above described condition items related to growth to the behavioral models 180 k(2) of the “baby period”.
  • each behavioral model 180 k(2) of the “baby period” has a larger number of nodes NODE 0 ′ to NODE n ′ than the behavioral model 180 k(1) of the “tweety period”. Further, the contents of the actions outputted from these behavioral models 180 k(2) are more difficult and more complicated in level (growth level) than those of the actions in the “tweety period”.
  • the pet robot 110 then acts and operates in accordance with these behavioral models 180 k(2) , so as to “slightly securely walk” by increasing the rotating speed of the respective actuators 135 1 to 135 n ( FIG. 19 ), for instance, as for the “walking state”, to make “a little higher and more complicated” movement by increasing the number of actions, as for the “motion”, to have “a little purposeful” conduct as for the “action” and to have “a little long and loud sounds” as for the “sound”.
  • every time the value of the growth parameter thereafter exceeds the threshold values preset so as to correspond to the “child period”, the “young period” and the “adult period”, the learning module 182 informs the behavioral model library 180 of it in a similar way to the above described case. Further, every time this information is supplied thereto, the behavioral model library 180 sequentially changes the behavioral models used for the above described condition items related to growth to the behavioral models 180 k(3) to 180 k(5) of the “child period”, the “young period” and the “adult period”, respectively.
  • the “walking state” changes from “walking totteringly” to “walking securely”
  • the “motion” changes from a “simple” motion to a “high level and complicated” motion
  • the “action” changes from the “monotonous action” to the “purposeful action”
  • the “sound” changes from the “small and short sound” to the “long and large sound”, sequentially stepwise.
  • in this way, the action and operation of the pet robot 110 are designed to “grow” through the five stages of the “tweety period”, the “baby period”, the “child period”, the “young period” and the “adult period” in accordance with the teaching given from the user and the lapse of time.
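  • a minimal sketch of this growth mechanism is given below: a growth parameter accumulated from teaching and elapsed time is compared with per-stage thresholds, and crossing a threshold selects the next set of behavioral models. The threshold values and the increment are invented for illustration only.

```python
# Hypothetical growth-parameter thresholds for the five "growth stages".
GROWTH_STAGES = [
    ("tweety period", 0.0),
    ("baby period", 10.0),
    ("child period", 30.0),
    ("young period", 60.0),
    ("adult period", 100.0),
]

class GrowthTracker:
    def __init__(self):
        self.growth_parameter = 0.0

    def record_teaching(self, amount=1.0):
        """Teaching such as "hit" or "patted" (and elapsed time) raises the parameter."""
        self.growth_parameter += amount

    def current_stage(self):
        """Return the highest "growth stage" whose threshold has been reached."""
        stage = GROWTH_STAGES[0][0]
        for name, threshold in GROWTH_STAGES:
            if self.growth_parameter >= threshold:
                stage = name
        return stage

tracker = GrowthTracker()
for _ in range(35):
    tracker.record_teaching()
print(tracker.current_stage())   # -> "child period" under the assumed thresholds
```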
  • the growth model of the pet robot 110 is a model which branches after the “young” as shown in FIG. 27 .
  • a plurality of behavioral models are prepared as the behavioral models 180 k(3) to 180 k(5) of the “child period”, the “young period” and the “adult period” respectively for the above described condition items related to growth.
  • specifically, as the behavioral models of the “child period”, a behavioral model (CHILD 1 ′) for performing an action of a “wild” character whose movement is rough and rapid and
  • a behavioral model (CHILD 2 ′) for performing an action of a “gentle” character whose movement is smoother and slower than the former are prepared.
  • as the behavioral models of the “young period”, a behavioral model (YOUNG 1 ′) for performing an action of a “rough” character whose movement is rougher and faster than that of the “wild” character of the “child period”, a behavioral model (YOUNG 2 ′) for performing an action and operation of an “ordinary” character whose movement is slower and smoother than that of the YOUNG 1 ′, and a behavioral model (YOUNG 3 ′) for performing an action of a “gentle” character whose movement is much slower and whose amount of movement is lower than that of the YOUNG 2 ′ are prepared.
  • as the behavioral models of the “adult period”, a behavioral model (ADULT 1 ′) for performing an action of a very irritable and “aggressive” character whose movement is rougher and faster than that of the “rough” character of the “young period”, a behavioral model (ADULT 2 ′) for performing an action of an irritable and “wild” character whose movement is smoother and slower than that of the ADULT 1 ′, a behavioral model (ADULT 3 ′) for performing an action of a “gentle” character whose movement is smoother and slower and whose amount of movement is lower than that of the ADULT 2 ′, and a behavioral model (ADULT 4 ′) for performing an action of a “quiet” character whose movement is much slower and whose amount of movement is lower than that of the ADULT 3 ′ are prepared.
  • the learning module 182 designates which character, that is, which of the behavioral models CHILD 1 ′, CHILD 2 ′, YOUNG 1 ′ to YOUNG 3 ′ and ADULT 1 ′ to ADULT 4 ′, is to be used as the behavioral model of the next “growth stage” for the condition items related to growth, on the basis of the number of times of “hit” and “patted”, etc. in each “growth stage” on and after the “child period”.
  • the behavioral model library 180 changes the behavioral models used after the “child period” to the behavioral models of the designated “characters” respectively for the condition items related to growth on the basis of the designation.
  • the “character” in a next “growth stage” is determined depending on the “character” in the present “growth stage” upon shift to the next “growth stage”.
  • a character can be changed only to another character of the “characters” connected together by arrow marks. Therefore, for example, in the case where the behavioral model (CHILD 1 ′) of a “wild” character is employed in the “child period”, it cannot be changed to the behavioral model (YOUNG 3 ′) of a “gentle” character in the “young period”.
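  • the restriction that a character can only shift along connected arrows can be represented as a small directed graph, as in the sketch below. Only the exclusion of CHILD 1 ′ → YOUNG 3 ′ is taken from the example above; the remaining connections are assumptions standing in for the arrow marks mentioned above.

```python
# Hypothetical character transition graph; only the absence of the edge
# CHILD_1 -> YOUNG_3 is taken from the description, the rest is illustrative.
CHARACTER_GRAPH = {
    "CHILD_1": ["YOUNG_1", "YOUNG_2"],   # the "wild" child cannot become the "gentle" young
    "CHILD_2": ["YOUNG_2", "YOUNG_3"],
    "YOUNG_1": ["ADULT_1", "ADULT_2"],
    "YOUNG_2": ["ADULT_2", "ADULT_3"],
    "YOUNG_3": ["ADULT_3", "ADULT_4"],
}

def can_shift(current, candidate):
    """A character may change only to a character connected by an arrow."""
    return candidate in CHARACTER_GRAPH.get(current, [])

print(can_shift("CHILD_1", "YOUNG_3"))   # False, as in the example above
print(can_shift("CHILD_1", "YOUNG_1"))   # True under the assumed graph
```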
  • the pet robot 110 is designed to change its “character” with its “growth” in accordance with the actions from the user or the like as if the character of a real animal were formed by a breeding method of an owner.
  • the emotion and instinct are designed to “grow” as the above described actions “grow”.
  • specifically, emotion parameter files 200 A to 200 E, in which the values of the coefficient k e ′ of the equation (4) for each emotional behavior are respectively described for each of the “growth stages”, are prepared as shown in FIGS. 29 (A) to 29 (E).
  • the emotion model 183 is designed to cyclically update the parameter values of the emotional behaviors on the basis of the equation (4) by using the values of the coefficient k e ′ described in the emotion parameter file 200 A for the “tweety period” during the initial time (namely, while the “growth stage” is the stage of the “tweety period”).
  • then, when the “growth stage” rises, the learning module 182 supplies information informing of the rise of the stage to the emotion model 183 as in the case of the behavioral model library 180 ( FIG. 22 ). The emotion model 183 then updates the values of the coefficient k e ′ of the equation (4) for the respective emotional behaviors to the corresponding values described in the emotion parameter files 200 B to 200 E of the corresponding “growth stages” every time this information is supplied.
  • the instinct model 184 is adapted to cyclically update the parameter values of the respective desires on the basis of the equation (5) by employing the values of coefficient k i ′ described in the instinct parameter file 201 A in the “tweety period”, during an initial time (that is to say, the “growth stage” is a stage of the “tweety period”).
  • the instinct model 184 is adapted to update the values of coefficient k i ′ in the equation of (5) for each desire to corresponding values described in the instinct parameter files 201 B to 201 E of the corresponding “growth stages” every time the notification is supplied thereto.
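  • switching the sensitivity coefficients per “growth stage” amounts to looking up a different coefficient table whenever the learning module reports a stage change, roughly as sketched below; the coefficient values are placeholders and not the contents of the parameter files 200 A to 200 E or 201 A to 201 E.

```python
# Placeholder per-stage sensitivity coefficients k_e' (the real values live in the
# emotion parameter files 200A-200E); only two stages are shown for brevity.
EMOTION_COEFFS = {
    "tweety period": {"joy": 0.1, "sadness": 0.1, "anger": 0.0,
                      "surprise": 0.0, "disgust": 0.0, "fear": 0.0},
    "adult period":  {"joy": 0.6, "sadness": 0.5, "anger": 0.7,
                      "surprise": 0.4, "disgust": 0.3, "fear": 0.4},
}

class StagedEmotionModel:
    def __init__(self, stage="tweety period"):
        self.k_e = dict(EMOTION_COEFFS[stage])

    def on_growth_stage_changed(self, new_stage):
        """Called when the learning module reports a rise of the growth stage."""
        self.k_e = dict(EMOTION_COEFFS[new_stage])

model = StagedEmotionModel()
model.on_growth_stage_changed("adult period")
print(model.k_e["anger"])   # -> 0.7 under the placeholder values
```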
  • in an earlier “growth stage”, only some of the desires, such as the “appetite”, the “affection desire” and the “curiosity”, can be represented as actions, whereas in the “young period” and the “adult period” all the desires can be expressed as actions.
  • the pet robot 110 having the above described configuration can express only a part of the emotional behaviors and desires of six emotional behaviors and four desires as the actions during an initial time. Then, as the “growth stage” rises, the number of emotional behaviors and desires which can be expressed as the actions increases.
  • since the emotion and instinct of the pet robot 110 also “grow” with the “growth” of the actions, the “growth” can be expressed more biologically and naturally, and the user can enjoy the processes thereof.
  • further, since the emotion and instinct models of the pet robot 110 first start from two simple emotional behaviors and one desire, the user can grasp the actions of the pet robot 110 with ease. Then, as the user becomes accustomed to them, the emotion and instinct models become complicated little by little, so that the user can readily understand and keep up with the emotion and instinct models in each “growth stage”.
  • the present invention is not limited thereto, and can be widely applied to, for instance, a two-footed robot or other various kinds of robot apparatuses.
  • the present invention is not limited thereto, and the number of emotional behaviors or desires used for generating the action may be restricted so as to increase stepwise in, for instance, the behavioral model library 180 or the learning module 182 .
  • the present invention is not limited thereto, and the number of emotional behaviors or desires used for generating the action may be restricted so as to increase stepwise, in addition thereto or in place thereof, on the basis of other conditions (for example, a desired action can be successfully done) except the above.
  • the present invention is not limited thereto, and the action may be generated on the basis of one behavioral model and various kinds of other configurations may be broadly applied to the configuration of the action generating means.
  • reference numeral 205 generally denotes a pet robot according to a fourth embodiment, which is configured similarly to the pet robot 110 according to the third embodiment except a point that the sensitivity of desires and emotional behaviors is changed in accordance with a circumferential state or actions from a user or the like.
  • in the pet robot 205 as well, the parameter values of the emotional behaviors and the desires are updated by using the equation (4) or (5).
  • these updates depend on external inputs such as the sensor data from the various kinds of sensors, the image data, the audio data and the like.
  • such external inputs are greatly dependent upon the environment in which the pet robot 110 is present and the manner in which the user treats the pet robot.
  • for example, in the case where the pet robot 110 is frequently “hit”, it is brought into a state in which the emotion of “anger” is amplified for most of the time, so that the emotion of “joy” is not expressed as an action or operation even when it is occasionally “patted”. For the pet robot 110 placed in such a state, the sensitivity of “joy” needs to be raised above that of “anger” so that the number and kinds of emotions expressed as actions or operations are not biased.
  • thus, in this embodiment, the parameter values of each emotional behavior and each desire are separately integrated over a long time, these integrated values are compared between the parameters, and the sensitivity of an emotional behavior or a desire is raised or lowered when the rate of its integrated value relative to the total is extremely small or large, so that all emotional behaviors and desires can be equally expressed as actions or operations so as to meet the environment or the manner in which the user treats the pet robot.
  • in practice, the emotion model 207 of the application layer 206 shown in FIG. 22 sequentially calculates the integrated value E k ″ of the parameter values of each emotional behavior and the integrated value T all ″ of the lapse of time after the power source is last turned on, at intervals of a prescribed period ΔT′ (for instance, 1 to 2 minutes), in accordance with the following equations.
  • E k ″ = E k ′ + E k (t)′ × ΔT′   (7)
  • T all ″ = T all ′ + ΔT′   (8)
  • then, upon stop of the power source of the pet robot 205 (upon shut-down of the system), the emotion model 207 adds the integrated value E k ″ of each emotional behavior and the integrated value T all ″ of the lapse of time to the corresponding total integrated value E k(TOTAL) for each emotional behavior and the total integrated value T all(TOTAL) of the lapse of time stored in respectively prepared files (hereinafter called total emotional behavior integrated value files), and stores these total integrated values.
  • the emotion model 207 reads out the total integrated value T all(TOTAL) of the lapse of time from the total emotional behavior integrated value file, every time the power of the pet robot 205 is turned on.
  • when this total integrated value T all(TOTAL) exceeds a preset threshold value (for example, 10 hours), the total integrated value E k(TOTAL) of each emotional behavior stored in the total emotional behavior integrated value file is evaluated. Specifically, the evaluation is executed by calculating the rate of the total integrated value E k(TOTAL) of each emotional behavior relative to the total value (ΣE k(TOTAL) ) of the total integrated values of all the emotional behaviors.
  • when the rate for an emotional behavior is extremely small, the emotion model 207 raises the value of the coefficient k e ′ of that emotional behavior described in the emotion parameter files 200 A to 200 E of the corresponding “growth stage” (described above referring to FIGS. 28 (A) to 28 (E)) by a prescribed amount (for instance, 0.1).
  • on the other hand, when the rate is extremely large, the value of this coefficient k e ′ is lowered by a prescribed amount (for instance, 0.1). In such a way, the emotion model 207 adjusts the coefficient k e ′ indicating the sensitivity of each emotional behavior.
  • the threshold value can be set with a tolerance to some degree for each emotional behavior so that the individuality of each robot is not injured.
  • for instance, a range of 10 [%] to 50 [%] relative to the total value (ΣE k(TOTAL) ) of the total integrated values of the emotional behaviors is set for one emotional behavior, a range of 5 [%] to 20 [%] relative to the total value is set for “sadness”, and a range of 10 [%] to 60 [%] relative thereto is set for “anger”.
  • when the emotion model 207 completes the same processing for all the emotional behaviors, the emotion model 207 returns the total integrated value E k(TOTAL) of each emotional behavior and the total integrated value T all(TOTAL) of the lapse of time to “0”. Then, while changing the parameter values of each emotional behavior in accordance with the equation (4) by using the newly determined coefficient k e ′ for each emotional behavior, the emotion model 207 newly starts the integration of the parameter values of each emotional behavior and of the lapse of time in accordance with the equations (7) and (8), and then repeats the same processing as mentioned above.
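  • one way to read this adjustment loop is sketched below: once the integrated lapse of time passes the threshold, each emotional behavior’s share of the total integrated value decides whether its coefficient k e ′ goes up or down by a fixed step. The data layout and the first tolerance range are illustrative assumptions; the “sadness” and “anger” ranges follow the example above.

```python
def adjust_sensitivities(totals, k_e, ranges, step=0.1):
    """totals: total integrated value E_k(TOTAL) per emotional behavior.
    k_e: current sensitivity coefficients; ranges: allowed share (lower, upper).
    Raise k_e' when a behavior's share of the total is too small,
    lower it when the share is too large."""
    grand_total = sum(totals.values()) or 1.0
    adjusted = dict(k_e)
    for emotion, total in totals.items():
        share = total / grand_total
        lower, upper = ranges[emotion]
        if share < lower:
            adjusted[emotion] = k_e[emotion] + step
        elif share > upper:
            adjusted[emotion] = max(0.0, k_e[emotion] - step)
    return adjusted

# Tolerance ranges: "sadness" 5-20 [%] and "anger" 10-60 [%] as in the text;
# the "joy" range is assumed for illustration.
ranges = {"joy": (0.10, 0.50), "sadness": (0.05, 0.20), "anger": (0.10, 0.60)}
totals = {"joy": 5.0, "sadness": 40.0, "anger": 55.0}
k_e = {"joy": 0.5, "sadness": 0.5, "anger": 0.5}
print(adjust_sensitivities(totals, k_e, ranges))
# -> joy raised to 0.6, sadness lowered to 0.4, anger unchanged
```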
  • in this way, the sensitivity of each emotional behavior is changed, so that all the emotional behaviors can be equally expressed as actions or operations so as to meet the environment or the manner in which the user treats the pet robot.
  • similarly, the instinct model 208 ( FIG. 22 ) sequentially calculates the integrated value I k ″ of the parameter values of each desire after the power source is last turned on in accordance with the following equation (9), and sequentially calculates the integrated value T all ″ of the lapse of time in accordance with the equation (8), at intervals of the prescribed period ΔT′ (for instance, 1 to 2 minutes).
  • I k ″ = I k ′ + I k (t)′ × ΔT′   (9)
  • then, upon stop of the power source of the pet robot 205 , the instinct model 208 adds the integrated value I k ″ of each desire and the integrated value T all ″ of the lapse of time to the corresponding total integrated value I k(TOTAL) for each desire and the total integrated value T all(TOTAL) of the lapse of time stored in respectively prepared files (hereinafter called total desire integrated value files), and stores these total integrated values.
  • the instinct model 208 reads out the total integrated value T all(TOTAL) of the lapse of time from the total desire integrated value file, every time the power of the pet robot 205 is turned on.
  • when this total integrated value T all(TOTAL) exceeds a preset threshold value (for example, 10 hours), the total integrated value I k(TOTAL) of each desire stored in the total desire integrated value file is evaluated. Specifically, the evaluation is executed by calculating the rate of the total integrated value I k(TOTAL) of each desire relative to the total value (ΣI k(TOTAL) ) of the total integrated values of all the desires.
  • when the rate for a desire is extremely small, the instinct model 208 raises the value of the coefficient k i ′ of that desire described in the desire parameter files 201 A to 201 E of the corresponding “growth stage” (described above referring to FIGS. 30 (A) to 30 (E)) by a prescribed amount (for instance, 0.1).
  • on the other hand, when the rate is extremely large, the value of this coefficient k i ′ is lowered by a prescribed amount (for instance, 0.1).
  • the threshold value at this time is set with a tolerance to some degree for each desire as mentioned above.
  • when the instinct model 208 completes the same processing for all the desires, the instinct model 208 returns the total integrated value I k(TOTAL) of each desire stored in the total desire integrated value file and the total integrated value T all(TOTAL) of the lapse of time to “0”. Then, while changing the parameter values of each desire in accordance with the equation (5) by using the newly determined coefficient k i ′ for each desire, the instinct model 208 newly starts the integration of the parameter values of each desire and of the lapse of time in accordance with the equation (9), and then repeats the same processing as mentioned above.
  • the sensitivity of each desire is changed, so that all the desires can be equally expressed as the actions or operations so as to meet the environment or a manner by which the user treats the pet robot.
  • in the above configuration, the parameter values of each emotional behavior and each desire are sequentially integrated at intervals of the prescribed period ΔT′, and the sensitivity of each emotional behavior and each desire is changed on the basis of the integrated results.
  • in the pet robot 205 , therefore, all the emotional behaviors and desires can be equally expressed as actions or operations so as to meet the environment and the manner in which the user treats the pet robot, so that this pet robot 205 has higher amusement characteristics than the pet robot 110 of the third embodiment.
  • according to the above configuration, the parameter values of each emotional behavior and each desire are sequentially integrated at intervals of the prescribed period ΔT′, and the sensitivity of each emotional behavior and desire is changed on the basis of the integrated results. Therefore, all the emotional behaviors and desires can be equally expressed as actions or operations so as to meet the environment and the way in which the user treats the pet robot, and a pet robot with a much further improved amusement characteristic can thus be realized.
  • the present invention is not limited thereto, and can be widely applied to, for instance, a two-footed robot or various other kinds of robot apparatuses.
  • the present invention is not limited thereto, and the number of emotional behaviors or desires used for generating the action may be restricted so as to increase stepwise in, for instance, the behavioral model library 180 or the learning module 182 .
  • the present invention is not limited thereto, and the number of emotional behaviors or desires used for generating the action may be restricted so as to increase stepwise, in addition thereto or in place thereof, on the basis of other conditions (for example, a desired action can be successfully done) except the above.
  • the present invention is not limited thereto, and the number of emotional behaviors and desires used for generating the action may be initially decreased or decreased halfway (for instance, the “growth stage” such as an “old age period” is provided after the “adult period” so that the number of emotions or desires is decreased upon shift of the “growth stage” to the “old age period”).
  • the behavioral model changing means for changing the behavioral models 180 k(1) to 180 k(5) ( FIG. 26 ) to the behavioral models 180 k(2) to 180 k(5) high in growth level on the basis of the accumulation of prescribed stimulation (for instance, “hit” or “pat”) and the lapse of time comprises the learning module 182 and the behavioral model library 180 , needless to say, the present invention is not limited thereto and various kinds of other configurations can be widely applied.
  • furthermore, in the above embodiment, the emotion model 207 and the instinct model 208 as the emotional behavior and desire updating means update the parameter values of the respective emotional behaviors and desires on the basis of the externally applied stimulation and the lapse of time. However, the present invention is not limited thereto, and the parameter values of the respective emotional behaviors and desires may be updated on the basis of conditions other than the above conditions.
  • the present invention is not limited thereto and the sensitivity to each emotional behavior and each desire may be updated by any method other than the above method.
  • the present invention is not limited thereto and the environment may be evaluated on the basis of the number of times or the frequency of specific external stimulation such as “hit” or “patted” and a variety of methods other than the above methods may be widely applied as a method for evaluating the environment.
  • the present invention is applicable to a pet robot which acts like a quadruped animal.
  • a pet robot comprises detecting means 215 which detects an output from another pet robot, character discriminating means 216 which discriminates a character of the other pet robot on the basis of a detection result obtained by the detecting means 215 and character changing means 217 which changes a character on the basis of a discriminating result obtained by the character discriminating means 216 .
  • the pet robot discriminates the character of the other robot apparatus by the character discriminating means 216 on the basis of the detection result of the output from the other pet robot obtained by the detecting means 215 .
  • the pet robot is capable of changing a character of its own by the character changing means 217 on the basis of the discriminating result of the character of the other pet robot.
  • the detecting means 215 detects an emotion or the like expressed by an action of the other pet robot and the character changing means 217 changes the character by changing parameters or the like of an emotion model which determines an action of the pet robot itself deriving from an emotion.
  • the pet robot is capable of changing the character of its own on the basis of an action or the like of the other pet robot as described above.
  • the pet robot has a character which is shaped like that of a true animal and can act on the basis of the character.
  • a pet robot 210 is configured as a whole as shown in FIG. 32 , and composed by coupling a head unit 211 which corresponds to a head, a main body unit 212 which corresponds to a trunk, leg units 213 A through 213 D which correspond to legs and a tail unit 214 which corresponds to a tail so that the pet robot 210 acts like a true quadruped animal by moving the head unit 211 and the leg units 213 A through 213 D and the tail unit 214 relative to a main body unit 212 .
  • an image recognizing section 220 which corresponds to eyes and consists, for example, of CCD (Charge Coupled Device) cameras for picking up an image
  • microphones 221 which correspond to ears for collecting a voice
  • speaker 222 which corresponds to a mouth for giving sounds.
  • a remote controller receiver 223 which receives a command transmitted from a user by way of a remote controller (not shown), a touch sensor 224 which detects touch with a user's hand or the like, and an image display 225 which displays an internally generated image.
  • a battery 231 is attached to a location of the main body unit 212 corresponding to a ventral side and an electronic circuit (not shown) is accommodated in the main body unit 212 for controlling the action of the pet robot 210 as a whole.
  • Joint portions of the leg units 213 A through 213 D, coupling portions between the leg units 213 A through 213 D and the main body unit 212 , a coupling portion between the main body unit 212 and the head unit 211 , a coupling portion between the main body unit 212 and the tail unit 214 and the like are coupled with actuators 233 A through 233 N respectively which are driven under control by the electronic circuit accommodated in the main body unit 212 .
  • by driving the actuators 233 A through 233 N, the pet robot 210 is made to act like a true quadruped animal as described above, swinging the head unit 211 up, down, left and right, wagging the tail unit 214 , and walking and running by moving the leg units 213 A through 213 D.
  • the head unit 211 has a command receiving section 240 which comprises a microphone 221 and a remote controller receiver 223 , an external sensor 241 which comprises an image recognizing section 220 and a touch sensor 224 , a speaker 222 and an image display 225 .
  • the main body unit 212 has a battery 231 , and comprises a controller 242 which controls an action of the pet robot 210 as a whole and an internal sensor 245 which comprises a battery sensor 243 for detecting a residual amount of the battery 231 and a heat sensor 244 for detecting heat generated in the pet robot 210 .
  • actuators 233 A through 233 N are disposed at predetermined locations respectively in the pet robot 210 .
  • the command receiving section 240 is used for receiving commands given from the user to the pet robot 210 , for example, commands such as “walk”, “prostrate” and “chase a ball”, and configured by the remote controller receiver 223 and the microphone 221 .
  • a remote controller (not shown) transmits an infrared ray corresponding to the above described input command to the remote controller receiver 223 .
  • upon receiving this infrared ray, the remote controller receiver 223 generates a reception signal S 1 A and sends out this signal to the controller 242 .
  • the microphone 221 collects sounds emitted from the user, generates an audio signal S 1 B and sends out this signal to the controller 242 .
  • in response to a command given from the user to the pet robot 210 , the command receiving section 240 generates a command signal S 1 comprising the reception signal S 1 A and the audio signal S 1 B, and supplies the command signal to the controller 242 .
  • the touch sensor 224 of the external sensor 241 is used for detecting a spurring from the user to the pet robot 210 , for example, spurring such as “tapping” or “striking”; when the user makes a desired spurring by touching the touch sensor 224 , the touch sensor 224 generates a touch detection signal S 2 A corresponding to the spurring and sends out this signal to the controller 242 .
  • the image recognizing section 220 of the external sensor 241 is used for detecting a discriminating result of an environment around the pet robot 210 , for example, surrounding environment information such as “dark” or “a favorite toy is present” or a movement of another pet robot such as “another pet robot is running”, photographs an image around the above described pet robot 210 and sends out an image signal S 2 B obtained as a result to the controller 242 .
  • This image recognizing section 220 captures an action which expresses an emotion of the other pet robot.
  • the external sensor 241 generates an external information signal S 2 comprising the touch detection signal S 2 A and the image signal S 2 B in response to external information given from outside the pet robot 210 as described above, and sends out the external information signal to the controller 242 .
  • the internal sensor 245 is used for detecting an internal condition of the pet robot 210 itself, for example, an internal condition of “hungry” meaning a lowered battery capacity or “fevered”, and is configured by the battery sensor 243 and the heat sensor 244 .
  • the battery sensor 243 is used for detecting a residual amount of the battery 231 which supplies power to each circuit of the pet robot 210 and sends out a battery capacity detection signal S 3 A to the controller 242 as a detection result.
  • the heat sensor 244 is used for detecting heat in the pet robot 210 and sends out a heat detection signal S 3 B to the controller 242 as a detection result.
  • the internal sensor 245 generates an internal information signal S 3 comprising the battery capacity detection signal S 3 A and the heat detection signal S 3 B in correspondence to internal information of the pet robot 210 as described above and sends out the internal information signal S 3 to the controller 242 as described above.
  • on the basis of the command signal S 1 supplied from the command receiving section 240 , the external information signal S 2 supplied from the external sensor 241 and the internal information signal S 3 supplied from the internal sensor 245 , the controller 242 generates control signals S 5 A through S 5 N for driving the actuators 233 A through 233 N and sends out the control signals to the actuators 233 A through 233 N, thereby making the pet robot 210 act.
  • the controller 242 also generates an audio signal S 10 and an image signal S 11 to be output outside as occasion demands, and informs the user of required information by outputting the audio signal S 10 through the speaker 222 and sending out the image signal S 11 to the image display 225 , thereby providing a desired image on the display.
  • the controller 242 processes the command signal S 1 supplied from the command receiving section 240 , the external information signal S 2 supplied from the external sensor 241 and the internal information signal S 3 supplied from the internal sensor 245 , and supplies a control signal S 5 obtained as a result to the actuators 233 A through 233 N.
  • Functions of the controller 242 for the data processing are classified into an emotion and instinct model section 250 used as emotion and instinct model changing means, a behavior determination mechanism section 251 used as action state determining means, a posture transition mechanism section 252 used as posture transition means and a control mechanism section 253 as shown in FIG. 34 , and the controller 242 inputs the command signal S 1 supplied from outside, the external information signal S 2 and the internal information signal S 3 to the emotion and instinct model section 250 and the behavior determination mechanism section 251 .
  • the emotion units 260 A through 260 C express degrees of emotions as intensities, for example, from 0 to 100 levels, and change the intensities of the emotions from one minute to the next on the basis of the command signal S 1 , the external information signal S 2 and the internal information signal S 3 which are supplied. Accordingly, the emotion and instinct model section 250 expresses an emotion state of the pet robot 210 by combining the intensities of the emotion units 260 A through 260 C which change from one minute to the next, thereby modeling changes with time of the emotions.
  • the desire unit 261 A expresses a desire of “appetite”
  • the desire unit 261 B expresses a desire of “desire for sleep”
  • the desire unit 261 C expresses a desire of “desire for movement”.
  • the desire units 261 A through 261 C express degrees of desires as intensities, for example, from 0 to 100 levels, and change the intensities of the desires from one minute to the next on the basis of the command signal S 1 , the external information signal S 2 and the internal information signal S 3 which are supplied.
  • accordingly, the emotion and instinct model section 250 expresses an instinct state of the pet robot 210 by combining the intensities of the desire units 261 A through 261 C which change from one minute to the next, thereby modeling changes with time of the instincts.
  • the emotion and instinct model section 250 changes the intensities of the emotion units 260 A through 260 C and the desire units 261 A through 261 C as described above on the basis of input information S 1 through S 3 which comprises the command signal S 1 , the external information signal S 2 and the internal information signal S 3 .
  • the emotion and instinct model section 250 determines the emotion state by combining the changed intensities of the emotion units 260 A through 260 C, determines the instinct state by combining the changed intensities of the desire units 261 A through 261 C, and sends out determined emotion state and instinct state to the behavior determination mechanism section 251 as emotion and instinct state information S 10 .
  • the emotion and instinct model section 250 combines emotion units which are desired in the basic emotion group 260 so as to restrain or stimulate each other, thereby changing an intensity of one of the combined emotion units when an intensity of the other emotion unit is changed and realizing the pet robot 210 which has natural emotions.
  • the emotion and instinct model section 250 combines the “delight” emotion unit 260 A with the “sadness” emotion unit 260 B so as to restrain each other as shown in FIG. 36 , thereby enhancing an intensity of the “delight” emotion unit 260 A when the pet robot 210 is praised by the user and lowering an intensity of the “sadness” emotion unit 260 B as the intensity of the “delight” emotion unit 260 A is enhanced even when the input information S 1 through S 3 which changes the intensity of the “sadness” emotion unit 260 B is not supplied at this time.
  • the emotion and instinct model section 250 naturally lowers an intensity of the “delight” unit 260 A as an intensity of the “sadness” emotion unit 260 B is enhanced when the intensity of the “sadness” emotion unit 260 B is enhanced.
  • the emotion and instinct model section 250 combines the “sadness” emotion unit 260 B with the “anger” emotion unit 260 C so as to stimulate each other, thereby enhancing an intensity of the “anger” emotion unit 260 C when the pet robot is struck by the user and enhancing an intensity of the “sadness” emotion unit 260 B as the intensity of the “anger” emotion unit 260 C is enhanced even when the input information S 1 through S 3 which enhances the intensity of the “sadness” emotion unit 260 B is not supplied at this time.
  • the emotion and instinct model section 250 similarly enhances an intensity of the “anger” emotion unit 260 C naturally as the intensity of the “sadness” emotion unit 260 B is enhanced.
  • the emotion and instinct model section 250 combines desire units desired in the basic desire group 261 so as to restrain or stimulate each other, thereby changing an intensity of one of the combined desire units when an intensity of the other desire unit is changed and realizing the pet robot 210 which has natural instincts.
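  • a hedged sketch of this mutual restraint and stimulation follows: when one emotion unit’s intensity changes, coupled units are pushed in the opposite direction (restraint) or the same direction (stimulation) by some fraction of the change. The coupling pairs reflect the examples described above, while the gain value is an assumption.

```python
# Coupling signs: -1 pairs restrain each other, +1 pairs stimulate each other.
COUPLINGS = {
    ("delight", "sadness"): -1,    # restrain each other
    ("sadness", "anger"): +1,      # stimulate each other
}

def apply_change(intensities, emotion, delta, coupling_gain=0.5):
    """Change one emotion unit's intensity and propagate to coupled units."""
    intensities[emotion] = min(100.0, max(0.0, intensities[emotion] + delta))
    for (a, b), sign in COUPLINGS.items():
        if emotion in (a, b):
            other = b if emotion == a else a
            coupled = sign * coupling_gain * delta
            intensities[other] = min(100.0, max(0.0, intensities[other] + coupled))
    return intensities

state = {"delight": 50.0, "sadness": 50.0, "anger": 50.0}
print(apply_change(state, "delight", +20.0))
# -> "sadness" drops although no direct input lowered it, as described above
```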
  • action information S 12 representing a current action or a past action of the pet robot 210 itself, for example, “walked for a long time” is supplied to the emotion and instinct model section 250 from the behavior determination mechanism section 251 disposed at a later stage so that the emotion and instinct model section 250 generates emotion and instinct state information S 10 which is different dependently on an action of the pet robot represented by the above described action information S 12 even when identical input information S 1 through S 3 is given.
  • the emotion and instinct model section 250 has intensity increase/decrease functions 265 A through 265 C which are disposed at a stage before the emotion units 260 A through 260 C as shown in FIG. 37 and generate intensity information S 14 A through S 14 C to enhance and/or lower intensities of the emotion units 260 A through 260 C on the basis of the action information S 12 representing an action of the pet robot 210 and the input information S 1 through S 3 , and enhances and/or lowers the intensities of the emotion units 260 A through 260 C in correspondence to the intensity information S 14 A through S 14 C output from the above described intensity increase/decrease functions 265 A through 265 C.
  • the emotion and instinct model section 250 enhances an intensity of the “delight” emotion unit 260 A, for example, when the pet robot makes a courtesy to the user and is tapped on the head, that is, when the action information S 12 representing a courtesy to the user and the input information S 1 through S 3 representing being tapped on the head are given to the intensity increase/decrease function 265 A, whereas the emotion and instinct model section 250 does not change the intensity of the “delight” emotion unit 260 A even when the pet robot is tapped on the head during execution of some task, that is, when the action information S 12 representing execution of a task and the input information S 1 through S 3 representing being tapped on the head are given to the intensity increase/decrease function 265 A.
  • the emotion and instinct model section 250 determines intensities of the emotion units 260 A through 260 C while referring not only to the input information S 1 through S 3 but also to the action information S 12 representing the current or past action of the pet robot 210 , thereby being capable of preventing an unnatural emotion from arising which enhances an intensity of the “delight” emotion unit 260 A, for example, when the user taps on the head for mischief during execution of some task.
  • the emotion and instinct model section 250 is configured to similarly enhance and/or lower intensities also of the desire units 261 A through 261 C respectively on the basis of the input information S 1 through S 3 and the action information S 12 which are supplied.
  • when the input information S 1 through S 3 and the action information S 12 are input, the intensity increase/decrease functions 265 A through 265 C generate and output the intensity information S 14 A through S 14 C as described above depending on parameters which are preliminarily set, thereby making it possible to breed the above described pet robot 210 so as to be a pet robot having an individuality, for example, a pet robot liable to get angry or a pet robot having a cheerful character, by setting the above described parameters of the pet robot 210 at different values.
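  • the individuality described here can be thought of as per-robot parameters placed in front of each emotion unit, roughly as in the sketch below; the gain table, its keys and its values are assumptions chosen only to mirror the “courtesy”/“task” example above.

```python
def intensity_increase_decrease(action_info, stimulus, gains):
    """Return the intensity change for one emotion unit.
    gains: per-(current action, stimulus) multipliers encoding the individuality,
    e.g. being tapped on the head after a courtesy raises "delight",
    while the same tap during execution of a task changes nothing."""
    return gains.get((action_info, stimulus), 0.0)

# Two individualities differ only in their preset parameters (gains).
cheerful_gains  = {("courtesy", "tapped on the head"): +15.0,
                   ("executing a task", "tapped on the head"): 0.0}
irritable_gains = {("courtesy", "tapped on the head"): +5.0,
                   ("executing a task", "tapped on the head"): -5.0}

print(intensity_increase_decrease("courtesy", "tapped on the head", cheerful_gains))
print(intensity_increase_decrease("courtesy", "tapped on the head", irritable_gains))
```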
  • parameters of the emotion model can be changed dependently on a character of another robot apparatus (hereinafter referred to as a mate robot).
  • a character of the pet robot 210 is changed through interactions with the mate robot, thereby shaping the character with a characteristic such as “He that touches pitch shall be defiled”. It is therefore possible to realize the pet robot 210 having a character which is formed naturally.
  • the emotion recognition mechanism section 271 recognizes whether or not an action of the mate robot expresses some emotion as well as a kind and an intensity of an emotion when the action expresses an emotion.
  • the emotion recognition mechanism section 271 comprises a sensor which detects an action of the mate robot and an emotion recognizing section which recognizes an emotion of the mate robot on the basis of a sensor input from the sensor, captures the action of the mate robot with the sensor and recognizes the emotion of the mate robot from the sensor input with the emotion recognizing section.
  • the sensor input is, for example, the external information signal S 2 out of the above described input signals and may be the audio signal S 1 B from the microphone 221 shown in FIG. 33 or the image signal S 2 B from the image recognizing section 220 shown in FIG. 33 .
  • the emotion recognition mechanism section 271 recognizes an emotion expressed by sounds emitted from the mate robot or an action of the mate robot which is used as the sensor input.
  • the pet robot 210 has an action pattern for actions deriving from emotions of the mate robot as information and compares this action pattern with an actual action of the mate robot, for example, a movement of a moving member or an emitted sound, thereby recognizing an emotion expressed by an action of the mate robot.
  • the pet robot 210 has an action pattern for a movement of a foot of the mate robot when the above described mate robot is angry and detects an angry state of the mate robot when a movement of the foot of the mate robot which is obtained with the image recognizing section 220 is coincident with the action pattern.
  • Actions of the pet robot 210 have been determined, for example, on the basis of a preliminarily registered emotion model.
  • the actions of the pet robot 210 result from expressions of emotions.
  • from the action pattern of the mate robot, the pet robot 210 is capable of comprehending which emotion of the mate robot a given action results from.
  • the pet robot is easily capable of comprehending emotions of the mate robot.
  • the pet robot 210 is capable of recognizing that the mate robot is angry when the pet robot 210 detects an angry action, for example, an angry walking manner or an angry eye.
  • the emotion recognition mechanism section 271 sends information of an emotion expressed by the mate robot which is recognized as described above to the memory and analysis mechanism section 272 .
  • the memory and analysis mechanism section 272 judges a character of the mate robot, for example, an irritable character or a pessimistic character. Specifically, the memory and analysis mechanism section 272 stores the information sent from the emotion recognition mechanism section 271 and analyzes a character of the mate robot on the basis of a change of the information within a certain time.
  • the memory and analysis mechanism section 272 takes out information within the certain time from information stored in a data memory (not shown) and analyzes an emotion expression ratio.
  • when information of an emotion related to “anger” is obtained at a ratio as shown in FIG. 39 , for example, the memory and analysis mechanism section 272 judges that the mate robot has a character which is liable to be angry.
  • the memory and analysis mechanism section 272 sends information of a character of the mate robot obtained as described above to the emotion parameter change mechanism section 273 .
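  • the analysis in the memory and analysis mechanism section 272 can be pictured as counting the emotions recognized from the mate robot within a time window and checking which emotion dominates, as in the sketch below; the window, the ratio threshold and the label mapping are assumptions.

```python
from collections import Counter

# Assumed mapping from a dominant emotion to a judged character.
CHARACTER_LABELS = {"anger": "irritable character", "sadness": "pessimistic character"}

def discriminate_character(observed_emotions, threshold=0.5):
    """observed_emotions: emotions recognized from the mate robot within a time window.
    If one emotion accounts for more than `threshold` of the observations,
    judge the corresponding character (e.g. frequent anger -> irritable)."""
    if not observed_emotions:
        return "unknown"
    emotion, count = Counter(observed_emotions).most_common(1)[0]
    if count / len(observed_emotions) > threshold:
        return CHARACTER_LABELS.get(emotion, "ordinary character")
    return "ordinary character"

window = ["anger", "anger", "sadness", "anger", "anger", "delight"]
print(discriminate_character(window))   # -> "irritable character"
```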
  • the parameters of the above described intensity increase/decrease functions 265 A through 265 C may be changed as the parameters related to the emotions. Since the intensity information S 14 A through S 14 C is generated from the input information S 1 through S 3 and the action information S 12 in accordance with the changed parameters of the intensity increase/decrease functions 265 A through 265 C in this case, it is possible to breed the pet robot 210 so as to have, for example, a character which is liable to be angry or cheerful.
  • the behavior determination mechanism section 251 actually makes transition of a current state to a next state when a predetermined trigger occurs.
  • Examples of the trigger are: the execution time of the action in the current state reaching a definite time; input of specific input information S 14 ; and the intensity of a desired unit, out of the intensities of the emotion units 260 A through 260 C and the desire units 261 A through 261 C represented by the emotion and instinct state information S 10 supplied from the emotion and instinct model section 250 , exceeding a predetermined threshold value.
  • the behavior determination mechanism section 251 selects a transition destination state on the basis of whether or not the intensity of a desired unit, out of the intensities of the emotion units 260 A through 260 C and the desire units 261 A through 261 C represented by the emotion and instinct state information S 10 supplied from the emotion and instinct model section 250 , exceeds a predetermined threshold value. Accordingly, the behavior determination mechanism section 251 is configured to make transition to a different state dependently on the intensities of the emotion units 260 A through 260 C and the desire units 261 A through 261 C even when an identical command signal S 1 is input.
  • When the behavior determination mechanism section 251 detects, for example, a palm stretched out before the eyes while the intensity of the “anger” emotion unit 260 C is not lower than the predetermined threshold value, the behavior determination mechanism section 251 generates the action command information S 16 which causes the pet robot to take an action of “looking aside in anger”, regardless of whether or not the “pet robot is hungry”, that is, regardless of whether or not the battery voltage is lower than the predetermined threshold value.
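  • A minimal sketch of the trigger check described above is given below; the argument names are assumptions, and the three conditions correspond to the elapsed execution time, the specific input information and the intensity threshold mentioned in the text.

    def transition_triggered(elapsed_sec, action_time_limit_sec,
                             input_event, trigger_events,
                             unit_intensities, desired_unit, intensity_threshold):
        """Return True when any of the triggers for a state transition is satisfied."""
        # 1) The execution time of the action in the current state reached a definite time.
        if elapsed_sec >= action_time_limit_sec:
            return True
        # 2) Specific input information was given.
        if input_event in trigger_events:
            return True
        # 3) The intensity of the desired emotion/desire unit exceeded its threshold.
        return unit_intensities.get(desired_unit, 0.0) >= intensity_threshold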
  • When the parameters of the emotion model, that is, the intensities of the emotion units, are changed so that the intensities often exceed the predetermined threshold value, the pet robot 210 often takes the action of “looking aside in anger”.
  • the input information S 1 through S 3 which comprises the command signal S 1 , the external information signal S 2 and the internal information signal S 3 is input not only into the emotion and instinct model section 250 but also into the behavior determination mechanism section 251 since this information has contents which are different dependently on timings of input into the emotion and instinct model section 250 and the behavior determination mechanism section 251 .
  • When the external information signal S 2 of “tapped on the head” is supplied, for example, the controller 242 generates the emotion and instinct state information S 10 of “delight” with the emotion and instinct model section 250 and supplies the above described emotion and instinct state information S 10 to the behavior determination mechanism section 251 ; when the external information signal S 2 of “a hand is present before the eye” is supplied in this state, the controller 242 generates action command information S 16 of “willing to lend a hand” in the behavior determination mechanism section 251 on the basis of the above described information S 10 of “delight” and the external information signal S 2 of “a hand is present before the eye”, and sends out the action command information S 16 to the posture transition mechanism section 252 .
  • Postures to which transition is possible are classified into those to which direct transition is possible from a current posture and others to which the direct transition is impossible from the current posture. From a lying down state of the quadruped pet robot 210 with the four feet largely thrown out, for example, the direct transition is possible to a prostrating state, but impossible to a standing state, and the pet robot 210 must take actions in two stages of once drawing the hands and feet near the body and then standing up. Furthermore, there are postures which cannot be taken safely. The quadruped pet robot 210 easily falls down, for example, when the pet robot 210 attempts to give a hurrah with two forefeet raised in a standing posture.
  • the posture transition mechanism section 252 plans posture transition by searching for a path from the node ND 2 of “prostration” to a node ND 4 of “walking”, generates the action command information S 18 which emits a command of “stand up” and then a command of “walk”, and sends out the action command information S 18 to the control mechanism section 253 .
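  • The path search performed by the posture transition mechanism section 252 can be sketched as a search over a directed graph of postures, as below; only the “prostration” to “standing” to “walking” path is taken from the text, and the remaining nodes and edges of the graph are assumptions.

    from collections import deque

    # Hypothetical posture transition graph: each posture lists the postures to
    # which direct transition is possible.
    POSTURE_GRAPH = {
        "lying down":  ["prostration"],
        "prostration": ["standing", "lying down"],
        "standing":    ["walking", "prostration"],
        "walking":     ["standing"],
    }

    def plan_posture_path(current, target, graph=POSTURE_GRAPH):
        """Breadth-first search for a chain of directly reachable postures."""
        queue, visited = deque([[current]]), {current}
        while queue:
            path = queue.popleft()
            if path[-1] == target:
                return path[1:]          # the commands to emit in order
            for nxt in graph[path[-1]]:
                if nxt not in visited:
                    visited.add(nxt)
                    queue.append(path + [nxt])
        return None

    # plan_posture_path("prostration", "walking") returns ["standing", "walking"],
    # which corresponds to emitting a "stand up" command and then a "walk" command.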
  • the emotion and instinct model section (emotion model section shown in FIG. 38 ) 250 is capable of determining an action dependently on a character which is changed in correspondence to a character of the mate robot. Accordingly, the user can enjoy a character formation process of the pet robot 210 in correspondence to another robot and gain an interest in breeding.
  • the behavior determination mechanism section 251 of the controller 242 determines a next state successive to a current state on the basis of the current state, which corresponds to a history of the input information S 14 supplied sequentially, and the input information S 14 which is supplied next, thereby allowing the pet robot 210 to autonomously act on the basis of states of the emotions and instincts of its own.
  • the posture transition mechanism section 252 of the controller 242 makes transition from a current posture of the pet robot 210 to a posture corresponding to the action command information S 16 by changing the current posture through a predetermined path, thereby avoiding the event to take an unreasonable posture or the event of falling down.
  • the above described configuration changes states of an emotion and an instinct of the pet robot 210 on the basis of the input information S 1 through S 3 supplied to the controller 242 , determines an action of the pet robot 210 on the basis of changes of states of the emotion and the instinct, selects a posture to which transition is possible dependently on the above described determined action and moves the pet robot 210 , thereby making it possible to realize the pet robot 210 which is capable of autonomously acting on the basis of emotions and instincts of its own, and taking actions quite similar to those of a true pet.
  • Although the character state is changed only on the basis of an emotion expressed by the mate robot in the above described embodiment, the present invention is not limited by the embodiment and reference can be made also to other information.
  • the character state can be changed, for example, by disposing a dialogue analysis mechanism section 274 as shown in FIG. 43 and analyzing dialogue between the mate robot and the user (owner).
  • the present invention is not limited by the embodiment and it is possible to change the character of the pet robot 210 so as to be opposed to the character of the mate robot, that is, in a reverse direction.
  • the present invention is not limited by the embodiment and the mate robot may be in a plurality.
  • the pet robot 210 is capable of discriminating robots and changing a character of its own collectively from the plurality of robots or individually from specific robots. Accordingly, the pet robot 210 is capable of changing the character of its own dependently on characters of the plurality of mate robots when these robots have characters which are different from one another.
  • Although the pet robot 210 receives a user's command sent from the remote controller with the infrared ray in the above described embodiment, the present invention is not limited by the embodiment and the pet robot 210 may be configured to receive a user's command sent, for example, with a radio wave or an acoustic wave.
  • the present invention is not limited by the embodiment and it is possible, for example, to connect a computer to the pet robot 210 and input a user's command via the above described connected computer.
  • the present invention is not limited by the embodiment and it is possible to add an emotion unit expressing an emotion of “loneliness” to the emotion units 260 A through 260 C and add a desire unit expressing “desire for love” to the desire units 261 A through 261 C , or to determine states of an emotion and an instinct using a combination of other various kinds of emotion units and desire units.
  • Although a next action is determined by the behavior determination mechanism section 251 on the basis of the command signal S 1 , the external information signal S 2 , the internal information signal S 3 , the emotion and instinct state information S 10 and the action information S 12 in the above described embodiment, the present invention is not limited by the embodiment and a next action may be determined on the basis of only some of the command signal S 1 , the external information signal S 2 , the internal information signal S 3 , the emotion and instinct state information S 10 and the action information S 12 .
  • The present invention is applied to robots which are used in an entertainment field such as a game and an exhibition, pet robots which are used as pets, and the like.

Abstract

In a robot apparatus and a control method therefor, firstly, partial or whole state space of a behavioral model is expanded or reduced, secondly, transition to a predetermined node in the behavioral model is described as transition to a virtual node and a node group to be allotted to the virtual node is sequentially changed, thirdly, the number of emotions and/or desires which are used for generating actions is gradually increased, and fourthly, an environment is evaluated to update each sensitivity corresponding to each emotion and desire on the basis of the evaluated result. In the robot apparatus and the character discriminating method for the robot apparatus, a pet robot is provided with: detecting means for detecting outputs from other pet robots; character discriminating means for discriminating characters of the pet robots on the basis of the result detected by the detecting means; and character changing means for changing the character on the basis of the result judged by the character discriminating means.

Description

    TECHNICAL FIELD
  • The present invention relates to a robot apparatus and a control method therefor, and a robot character discriminating method, and more particularly to, for example, a pet robot.
  • BACKGROUND ART
  • As the first background art, in recent years, a four-legged walking type pet robot has been proposed and developed by the applicant of the present application. Such a pet robot has a shape similar to a dog and a cat which are raised in a general home, and is adapted to be able to autonomously act in response to approach from the user such as “patting” or “petting,” a surrounding environment or the like.
  • Also, this pet robot is mounted with a learning function to change revelation probability of corresponding action on the basis of approaches such as “patting” and “petting” from the user, a growth function for stepwise changing the degree of difficulty of the action and the level of complicatedness on the basis of accumulation, elapsed time or the like of the approaches concerned, or the like to thereby provide high marketability and amusement characteristics as the “pet robot.”
  • In such a pet robot, there are prepared behavioral models consisting of individual probability state transition models for each growth stage (hereinafter, referred to as “growth stage”), and the behavioral model is switched to the behavioral model of the above described “growth stage” on the basis of approaches from the user, accumulation of elapsed time or the like to thereby express the “growth.” Also, in the pet robot, the transition probability at a corresponding place in the behavioral model is changed in response to approaches from the user to thereby express the above described “learning.”
  • According to this method, however, the behavioral model is switched to a new behavioral model every time the pet robot “grows,” and therefore, the pet robot starts a new action for each “growth” as if the character were suddenly changed, and the result of the “learning” until then is canceled. This has led to a problem that the user, who has been used to and familiar with the action pattern until then, feels that it is unnatural.
  • Also, according to the method, even if there is any duplicate action pattern, it is necessary to prepare a behavioral model portion for the action pattern for each “growth stage,” and therefore, there is a problem that an operation for generating the behavioral model will become complicated by that much.
  • Therefore, if it is made possible to carry forward the action pattern or the learning result to the next “growth stage” during, for example, “growth,” it will be possible to get rid of such unnaturalness during “growth” as described above for expressing more organism-like “growth,” and it is considered that the entertainment characteristics could be that much improved.
  • Further, as the second background art, in recent years, a study for modeling the emotion of a human being on a computer and expressing the emotion has been vigorously promoted. As the robot technological attempts of such modeling, there have been known in Japan, the face robot of a laboratory of Fumio Hara in the Tokyo science college, or WAMOEBA 2 of a laboratory of Sugano in the Waseda University, the cat robot of OMURON Co., Ltd. or the like (“Model and Expression of Generation of Artificial Emotion” by Fumio Hara; Mathematical science, vol. 32, No. 7, page 52-58, 1994, “Study of Emotional Exchange between Human Being and Robot, Setting and Trial of Robot for Evaluation “WAMOEBA-2” by Ogata, Sugano; Lecture thesis of robotics and mechatronics lecture meeting of Japan mechanics learned society, vol. A, 1996, pp 449-452, and “Interactive Pet Robot having Emotion” by Tajima, Saito, Osumi, Kudo, Shibata; Preparatory copies of science lecture meeting of Japan Robot learned society, vol. 16, page 11-12, 1998).
  • In these studies, their subjects reside in how behaviors or expressions similar to those of a living thing can be obtained by initially employing already completed emotion and instinct models. However, in the case where the growth processes of the living thing are taken into consideration, it is impossible to estimate that the emotion and instinct thereof always work on the basis of the same models from a BABY period to an adult period. Therefore, the above mentioned modeling of human emotion has involved an unnatural problem from the viewpoint of “growth”.
  • Further, in the case where an application in which an autonomous robot treated as a pet is equipped with emotion or instinct is considered, it is difficult for a user to understand and accept a robot which has initially perfect emotion and instinct. For example, in a robot having a plurality of desires (for instance, “affection desire” and “exercise desire”, etc.) as the instinct and a plurality of emotional behaviors (for instance, “joy”, “sadness” and “fear”, etc.) as the emotion, since the internal motions of the robot are complicated, the user hardly understands what the robot currently wants or how the robot currently feels.
  • Further, in the above robot, if the emotion and the instinct of the robot do not change and are always the same, the user is liable to lose interest in the robot. Thus, the robot is disadvantageously insufficient in view of commercialization and amusement characteristics.
  • Furthermore, as the third background art, there have conventionally been proposed and developed the so-called quadruped walking type pet robots which act in accordance with users' commands and surrounding environments. This kind of pet robot has a form quite similar to that of a quadruped animal such as a dog or a cat which is bred at home and is configured to assume a posture of prostration when the robot receives an order of “prostrate” and take an action of “hand lending” when the user stretches out his hand before a mouth of the robot.
  • By the way, such a pet robot has an emotion model as well as a mechanism which determines an action by itself, but a feature of the pet robot which can be called a character is not changed under any influence of another robot.
  • Here, the character of an animal is formed under influences of surrounding environments, and when two pets are bred together, for example, the existence of one pet largely influences the forming of the character of the other pet in actual circumstances.
  • DISCLOSURE OF THE INVENTION
  • The present invention has been achieved in consideration of the above described points, and is aimed to propose: firstly, a robot apparatus and a control method therefor which are capable of improving the entertainment characteristics; secondly, a robot apparatus and a control method therefor which are capable of improving the amusement characteristics; and thirdly, a robot apparatus and a robot character discriminating method which are capable of forming a more realistic character.
  • In order to solve such problems, a robot apparatus according to the present invention is provided with: memory means for storing behavioral model; and action generating means for generating actions by the use of partial or full state space of the behavioral model, and the action generating means is caused to change state space to be used for action generation, of the behavioral models while expanding or reducing the state space. As a result, this robot apparatus is capable of reducing discontinuity in action output before and after change in state space to be used for action generation because the state space to be used for action generation continuously changes. Thereby, output actions can be changed smoothly and naturally, thus making it possible to realize a robot apparatus which improves the entertainment characteristics.
  • Also, according to the present invention, in a robot apparatus having a behavioral model consisting of state transition models and for generating action on the basis of the behavioral model concerned, transition to a predetermined node in the behavioral model is described as transition to a virtual node consisting of imaginary nodes, a predetermined first node group is allocated to the virtual node concerned, and change means for changing a node group to be allocated to the virtual node is provided. As a result, in this robot apparatus, it is possible to provide consistency in the output action because the behavioral model, which becomes the basis, is fixed. Thereby, output actions can be changed smoothly and naturally, thus making it possible to realize a robot apparatus which improves the entertainment characteristics.
  • Further, in the present invention, a control method for the robot apparatus is provided with a first step for generating action by the use of partial or full state space of the behavioral model, and a second step for changing state space to be used for action generation, of the behavioral models while expanding or reducing the state space. As a result, according to the control method for this robot apparatus, it is possible to reduce discontinuity in action output before and after change in state space to be used for action generation because the state space to be used for action generation continuously changes. Thereby, output actions can be changed smoothly and naturally, thus making it possible to realize a control method for a robot apparatus which improves the entertainment characteristics.
  • Further, according to the present invention, in the control method for the robot apparatus, transition to a predetermined node in the behavioral model is described as transition to a virtual node consisting of imaginary nodes, and there are provided a first step for allocating a predetermined node group to the virtual node concerned, and a second step for changing the node group to be allocated to the virtual node. As a result, according to the control method for this robot apparatus, it is possible to change the output action with consistency because the basic behavioral model is determined. Thereby, output actions can be changed smoothly and naturally, thus making it possible to realize a control method for a robot apparatus which improves the entertainment characteristics.
  • Further, according to the present invention, there is provided a robot apparatus which generates an action on the basis of the parameter value of each emotional behavior of an emotion model and/or the parameter value of each desire of an instinct model which are sequentially updated in accordance with prescribed conditions, the robot apparatus comprising restricting means for restricting the number of the emotional behaviors or the desires used for generating the action so as to increase or decrease them stepwise. As a result, according to this robot apparatus, the emotion and/or instinct can be changed as if the emotion and/or instinct of a real living thing “grew”.
  • Further, according to the present invention, there is provided a robot apparatus which generates an action on the basis of the parameter value of each emotional behavior of an emotion model and/or the parameter value of each desire of an instinct model, the robot apparatus comprising emotional behavior and/or desire updating means for sequentially updating the parameter value of each emotional behavior and/or the parameter value of each desire, depending on corresponding sensitivity individually set to each emotional behavior and/or each desire, on the basis of externally applied stimulation and/or the lapse of time; and sensitivity updating means for evaluating an environment and respectively updating the sensitivity corresponding to each emotional behavior and/or each desire on the basis of the evaluated result. Consequently, according to this robot apparatus, the sensitivity of each emotional behavior and/or each desire can be optimized relative to the environment.
  • Still further, according to the present invention, there is provided a control method for a robot apparatus which generates an action on the basis of the parameter value of each emotional behavior of an emotion model and/or the parameter value of each desire of an instinct model which are sequentially updated in accordance with prescribed conditions, the control method for the robot apparatus comprising: a first step of restricting the number of the emotional behaviors and/or the desires used for generating the action during an initial time; and a second step of increasing or decreasing stepwise the number of the emotional behaviors and/or the desires used for generating the action. As a result, according to the control method for a robot apparatus, the emotion and/or instinct can be changed as if the emotion and/or instinct of a real living thing “grew”.
  • Still further, according to the present invention, there is provided a control method for a robot apparatus which generates an action on the basis of the parameter value of each emotional behavior of an emotion model and/or the parameter value of each desire of an instinct model, the control method for a robot apparatus comprising a first step of updating the parameter value of each emotional behavior and/or the parameter value of each desire, depending on corresponding sensitivity individually set to each emotional behavior and/or each desire, on the basis of externally applied stimulation and the lapse of time; and a second step of evaluating an environment and respectively updating the sensitivity corresponding to each emotional behavior and/or each desire on the basis of the evaluated result. Therefore, according to this control method for a robot apparatus, the sensitivity of each emotional behavior and/or each desire can be optimized relative to the environment.
  • Further, the robot apparatus according to the present invention comprises detecting means for detecting an output from another robot apparatus and character discriminating means which discriminates a character of the other robot apparatus on the basis of a result detected by the detecting means. As a result, such a robot apparatus discriminates the character of the other robot apparatus, with the character discriminating means, on the basis of the detection result of the output from the other robot apparatus detected by the detecting means. Thereby, the robot apparatus can change its own character based on the discrimination result of the character of the other robot apparatus, thus making it possible to realize a robot apparatus which can form its character more realistically.
  • Further, the character discriminating method for a robot apparatus according to the present invention detects an output from a robot apparatus and discriminates a character of the above described robot apparatus on the basis of the detection result. Accordingly, this character discriminating method makes it possible to realize a robot apparatus which can change its own character on the basis of the discrimination result of the character of another robot apparatus.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a perspective view showing an external appearance configuration of a pet robot according to first and second embodiments.
  • FIG. 2 is a block diagram showing a circuit configuration of the pet robot.
  • FIG. 3 is a conceptual view showing software configuration of a control program.
  • FIG. 4 is a conceptual view showing software configuration of a middleware layer.
  • FIG. 5 is a conceptual view showing software configuration of an application layer.
  • FIG. 6 is a conceptual view for explaining a behavioral model library.
  • FIG. 7 is a schematic diagram showing probability automaton.
  • FIG. 8 is a chart showing a state transition table.
  • FIG. 9 is a conceptual view showing detailed configuration of the behavioral model library.
  • FIG. 10 is a conceptual view showing a growth model of the pet robot.
  • FIG. 11 is a conceptual view for explaining acquisition and forgetfulness of an action pattern along with growth.
  • FIG. 12 is a conceptual view for explaining a difference file in the first embodiment.
  • FIG. 13 is a conceptual view for explaining transition from a plurality of nodes to a starting point node of one action pattern.
  • FIG. 14 is a conceptual view for explaining utilization of a virtual node.
  • FIG. 15 is a conceptual view showing configuration of a behavioral model of each action-related conditional item in the second embodiment.
  • FIG. 16 is a conceptual view for explaining an action pattern file.
  • FIG. 17 is a conceptual view for explaining a difference file according to the second embodiment.
  • FIG. 18 is a perspective view showing the configuration of the external appearance of a pet robot according to third and fourth embodiments.
  • FIG. 19 is a block diagram showing the circuit configuration of the pet robot.
  • FIG. 20 is a conceptual view showing the software configuration of a control program.
  • FIG. 21 is a conceptual view showing the software configuration of a middleware layer.
  • FIG. 22 is a conceptual view showing the software configuration of an application layer.
  • FIG. 23 is a conceptual view used for explaining a behavioral model library.
  • FIG. 24 is a schematic diagram showing a probability automaton.
  • FIG. 25 is a chart showing a state transition table.
  • FIG. 26 is a conceptual view showing the detailed configuration of the behavioral model library.
  • FIG. 27 is a conceptual view showing the growth model of the pet robot.
  • FIG. 28 is a conceptual view showing an emotion parameter file for each “growth stage”.
  • FIG. 29 is a flowchart used for explaining the growth of sensitivity and instinct.
  • FIG. 30 is a conceptual view showing an instinct parameter file for each “growth stage”.
  • FIG. 31 is a block diagram explaining the fifth embodiment.
  • FIG. 32 is a perspective view showing an embodiment of a pet robot according to the fifth embodiment.
  • FIG. 33 is a block diagram showing a circuit composition of a pet robot.
  • FIG. 34 is a schematic diagram showing data processing in a controller.
  • FIG. 35 is a schematic diagram showing data processing by an emotion and instinct model section.
  • FIG. 36 is a schematic diagram showing data processing by the emotion and instinct model section.
  • FIG. 37 is a schematic diagram showing data processing by the emotion and instinct model section.
  • FIG. 38 is a block diagram of component members for changing parameters of an emotion model in the above described pet robot.
  • FIG. 39 is a characteristic diagram showing an emotion expression ratio of a mate robot.
  • FIG. 40 is a state transition diagram of finite automaton in a behavior determination mechanism section.
  • FIG. 41 is a diagram showing a graph of posture transition in a posture transition mechanism section.
  • FIG. 42 is a block diagram showing the component members for changing the parameters of the emotion model in the pet robot described in the fifth embodiment and is descriptive of another embodiment for changing the parameters of the emotion model.
  • FIG. 43 is a block diagram of component members for changing the parameters of the emotion model in the pet robot described in the fifth embodiment, which comprises a dialogue analysis mechanism section for analyzing dialogue between a mate robot and a user.
  • FIG. 44 is a perspective view showing another embodiment of the pet robot described in the fifth embodiment.
  • BEST MODE FOR CARRYING OUT THE INVENTION
  • Hereinafter, with reference to the drawings, the detailed description will be made of an embodiment according to the present invention.
  • (1) First Embodiment
  • (1-1) Configuration of Pet Robot According to First Embodiment
  • In FIG. 1, reference numeral 1 denotes a pet robot according to the first embodiment as a whole, which is configured such that leg units 3A to 3D are coupled to a trunk unit 2 in front and in rear, and on both sides thereof, and a head unit 4 and a tail unit 5 are coupled to the front end and rear end of the trunk unit 2 respectively.
  • The trunk unit 2, as shown in FIG. 2, contains a control unit 16 in which a CPU (Central Processing Unit) 10, a DRAM (Dynamic Random Access Memory) 11, a flash ROM (Read Only Memory) 12, a PC (Personal Computer) card interface circuit 13 and a signal processing circuit 14 are connected to each other with an internal bus 15, and a battery 17 as a power source for the pet robot 1. In addition, the trunk unit 2 contains an angular velocity sensor 18, an acceleration sensor 19 or the like for detecting the direction and acceleration of the movement of the pet robot 1.
  • Disposed at predetermined positions in the head unit 4 are a CCD (Charge Coupled Device) camera 20 for picking up external conditions; a touch sensor 21 for detecting pressure given by physical approaches such as “petting” and “patting” from the user; a distance sensor 22 for measuring a distance to a front object; a microphone 23 for collecting external sounds; a speaker 24 for outputting sounds such as barks; LEDs (Light Emitting Diode) (not shown) corresponding to “eyes” of the pet robot 1; and the like.
  • Further, on joint portions of each leg unit 3A to 3D, each coupled portion between each leg unit 3A to 3D and the trunk unit 2, a coupled portion between the head unit 4 and the trunk unit 2, a coupled portion between the tail unit 5 and the tail 5A, and the like, there are disposed actuators 25 1 to 25 n and potentiometers 26 1 to 26 n having several degrees of freedom.
  • Various sensors including the angular velocity sensor 18, acceleration sensor 19, touch sensor 21, distance sensor 22, microphone 23, speaker 24, and each potentiometer 26 1 to 26 n, the LEDs and the actuator 25 1 to 25 n are connected to the signal processing circuit 14 in the control unit 16 via corresponding hubs 27 1 to 27 n, and the CCD camera 20 and the battery 17 are directly connected to the signal processing circuit 14.
  • At this time, the signal processing circuit 14 successively captures sensor data, image data and audio data to be supplied from each of the above described sensors, and successively stores these data in predetermined positions of the DRAM 11 via the internal bus 15. The signal processing circuit 14 successively captures battery residual amount data indicating the battery residual amount to be supplied from the battery 17, together with those data to store them in a predetermined position of the DRAM 11.
  • Each sensor data, image data, audio data and battery residual amount data, which have been stored in the DRAM 11, will be utilized when the CPU 10 controls the operation of this pet robot 1 later.
  • Actually, when the power supply for the pet robot 1 is initially turned on, the CPU 10 reads out a control program stored in a memory card 28 loaded in the PC card slot (not shown) of the trunk unit 2 or in the flash ROM 12, via the PC card interface circuit 13 or directly, to store it in the DRAM 11.
  • Then, on the basis of each sensor data, image data, audio data and battery residual amount data to be successively stored in the DRAM 11 from the signal processing circuit 14 as described above, the CPU 10 judges self conditions and surrounding conditions, the presence or absence of any instruction and approaches from the user, and the like.
  • Further, the CPU 10 determines a next action on the basis of this judgment result and the control program stored in the DRAM 11, and drives necessary actuators 25 1 to 25 n on the basis of the determination result concerned to thereby move the head unit 4 left, right, up or down, wag a tail 5A of the tail unit 5, and drive each leg unit 3A to 3D for walking among others.
  • At this time, the CPU 10 produces audio data as required, and gives it to the speaker 24 through the signal processing circuit 14 as an audio signal to thereby output voice based on the audio signal concerned outward, or to light, put out or blink the LED.
  • As described above, this pet robot 1 is adapted to be able to autonomously act depending on conditions of the self and surroundings and any instruction and approach from the user.
  • (1-2) Software Configuration of Control Program
  • FIG. 3 shows a software configuration of the control program in the pet robot 1. In this FIG. 3, a device driver layer 30 is located at the lowest layer of this control program, and is configured by a device driver set 31 consisting of a plurality of device drivers. In this case, each device driver is an object allowed to directly access hardware to be used in an ordinary computer such as the CCD camera 20 (FIG. 2) and a timer, and receives an interruption from corresponding hardware for processing.
  • A robotic server object 32 is located in the upper layer of the device driver layer 30, and is configured by: a virtual robot 33 consisting of a software group for providing an interface for accessing hardware such as, for example, various sensors and actuators 25 1 to 25 n described above; a power manager 34 consisting of a software group for managing switching or the like of the power supply; a device driver manager 35 consisting of a software group for managing other various device drivers; and a designed robot 36 consisting of a software group for managing the mechanism of the pet robot 1.
  • The manager object 37 is configured by an object manager 38 and a service manager 39. In this case, the object manager 38 is a software group for managing start and end of each software group included in the robotic server object 32, a middleware layer 40 and an application layer 41, and the service manager 39 is a software group for managing connections of the objects on the basis of information on connections between the objects written in a connection file stored in the memory card 28 (FIG. 2).
  • The middleware layer 40 is located in the upper layer of the robotic server object 32, and is configured by a software group for providing this pet robot 1 with basic functions such as image processing and audio processing. The application layer 41 is located in the upper layer of the middleware layer 40 and is configured by a software group for determining the actions of the pet robot 1 on the basis of results of processing obtained by processing by each software group constituting the middleware layer 40.
  • In this respect, FIG. 4 and FIG. 5 show concrete software configuration of the middleware layer 40 and the application layer 41 respectively.
  • The middleware layer 40 is, as is obvious also from FIG. 4, configured by: a recognition system 57 having signal processing modules 50 to 55 for musical scales recognition, for distance detection, for posture detection, for the touch sensor, for movement detection and for color recognition, an input semantics converter module 56 and the like; and an output system 65 having an output semantics converter module 57, signal processing modules 58 to 64 for posture management, for tracking, for operation playback, for walking, for falling-down and standing-up, for LED lighting and for sound reproducing, and the like.
  • In this case, each signal processing module 50 to 55 in the recognition system 57 captures corresponding data from among sensor data, image data and audio data to be read out from the DRAM 11 (FIG. 2) by a virtual robot 33 in the robotic server object 32, and carries out predetermined processing on the basis of the data to supply the processing results to the input semantics converter module 56.
  • On the basis of the processing result to be given from each of these signal processing modules 50 to 55, the input semantics converter module 56 recognizes the self conditions and the surroundings, such as “detecting a ball,” “fell down,” “has been petted,” “has been patted,” “hearing musical scales of do-re-mi-fa,” “detecting a moving object,” or “detecting an obstacle,” and any instruction and approach from the user, and outputs the recognition results to the application layer 41 (FIG. 2).
  • The application layer 41 is, as shown in FIG. 5, configured by five modules: a behavioral model library 70, an action switching module 71, a learning module 72, an emotion model 73 and an instinct model 74.
  • In this case, the behavioral model library 70 is, as shown in FIG. 6, provided with respectively-independent behavioral models 70 1 to 70 n, by bringing each of them into correspondence with pre-selected several conditional items respectively, such as “when the battery residual amount has got low,” “when standing up after falling down,” “when dodging an obstacle,” “when expressing an emotion,” and “when detecting a ball.”
  • When the recognition result is given from the input semantics converter module 56, or when a fixed time period has elapsed since the final recognition result was given, or the like, these behavioral models 70 1 to 70 n determine a next action while referring to a parameter value of corresponding emotion held in the emotion model 73, and a parameter value of a corresponding desire held in the instinct model 74 as described later, as required, to output the determination result to the action switching module 71.
  • In this respect, in the case of this embodiment, each behavioral model 70 1 to 70 n uses, as the technique of determining the next action, an algorithm referred to as a “probability automaton,” which probabilistically determines to which of the nodes NODE0 to NODEn a transition is made from one node (state) NODE0 to NODEn, as shown in FIG. 7, on the basis of transition probabilities P1 to Pn which have been set respectively for the arcs ARC1 to ARCa1 connecting the nodes NODE0 to NODEn.
  • Concretely, each behavioral model 70 1 to 70 n has such a state transition table 80 for each of nodes NODE0 to NODEn as shown in FIG. 8 by bringing each of them into correspondence with each of nodes NODE0 to NODEn which form own behavioral models 70 1 to 70 n respectively.
  • In this state transition table 80, with respect to the nodes NODE0 to NODEn, input events (recognition result), which are transition conditions, are enumerated in order of priority in a column of “Input Event Name,” and the other transition conditions are described in the corresponding lines in columns of “Data Name” and “Data Range.”
  • Accordingly, in a node NODE100 represented on the state transition table 80 of FIG. 8, when recognition result “detecting a ball” has been given, the fact that the “SIZE” of the ball is within a range of “from 0 to 1000” to be given together with the recognition result, or when recognition result “detecting an obstacle” has been given, the fact that the “distance” to the obstacle to be given together with the recognition result is within a range of “from 0 to 100,” is a condition required to make a transition to another node.
  • Also, in this node NODE100, even if no recognition result is inputted, a transition can be made to another node when, of the parameter values of each emotion and each desire held respectively by the emotion model 73 and the instinct model 74, to which the behavioral models 70 1 to 70 n periodically refer, the parameter value of any of “JOY,” “SURPRISE” and “SADNESS” held by the emotion model 73 is within a range of “50 to 100.”
  • In the state transition table 80, in a row of “Transition Target Node” in a column of “Transition Probability to Another Node,” names of nodes, to which a transition can be effected from the node NODE0 to NODEn, are enumerated, and when all conditions described in columns of “Input Event Name,” “Data Name” and “Data Range” are met, transition probability of each of other nodes NODE0 to NODEn, to which the transition can be effected, is respectively described in a position corresponding thereto in the column of “Transition Probability to Another Node.” Action to be outputted when transiting to the node NODE0 to NODEn is described in a row of “Output Action” in the column of “Transition Probability to Another Node.” In this respect, the sum of probability in each column in the column of “Transition Probability to Another Node” is 100[%].
  • Accordingly, in the node NODE100 represented in the state transition table 80 of FIG. 8, when, for example, the recognition result that “detecting a ball” and the “SIZE” of the ball is within a range of “from 0 to 1000” has been given, it is possible to transit to “Node NODE120(node 120)” at a probability of “30[%]” and at the time, the action of “ACTION 1” is to be outputted.
  • Each behavioral model 70 1 to 70 n is configured such that the nodes NODE0 to NODEn described as such a state transition table 80 are connected in a great number, and when recognition result has been given from the input semantics converter module 56 among others, the next action is adapted to be determined by using probability through the use of the state transition table 80 for the corresponding nodes NODE0 to NODEn to output the determination result to the action switching module 71.
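  • The probabilistic transition described by the state transition table 80 can be sketched as follows; the table fragment is modeled on the NODE100 example of FIG. 8, but apart from the 30[%] transition to NODE120 the events, ranges and probabilities are assumptions used only for illustration.

    import random

    # Hypothetical fragment of one node's state transition table:
    # (input event, data name, low, high) -> list of (target node, probability, output action).
    NODE100_TABLE = {
        ("BALL", "SIZE", 0, 1000): [("NODE120", 0.30, "ACTION1"),
                                    ("NODE100", 0.70, None)],
        ("OBSTACLE", "DISTANCE", 0, 100): [("NODE101", 1.00, "MOVE_BACK")],
    }

    def next_node(node_table, event, data_name, value):
        """Probabilistically choose the transition target for a recognition result."""
        for (ev, name, low, high), candidates in node_table.items():
            if ev == event and name == data_name and low <= value <= high:
                r, acc = random.random(), 0.0
                for target, prob, action in candidates:
                    acc += prob
                    if r < acc:
                        return target, action
                target, _, action = candidates[-1]      # floating-point fallback
                return target, action
        return None, None                               # no transition condition met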
  • Of action to be outputted respectively from each behavioral model 70 1 to 70 n in the behavioral model library 70, the action switching module 71 selects action outputted from a predetermined higher-priority behavioral model 70 1 to 70 n, and transmits a command (hereinafter, referred to as “action command”) to the effect that the action concerned should be taken to the output semantics converter 57 in the middleware layer 40. In this respect, in this embodiment, a behavioral model 70 1 to 70 n shown on the lower side in FIG. 6 is set higher in priority level.
  • Also, the action switching module 71 notifies the learning module 72, the emotion model 73 and the instinct model 74 to the effect that the action has been completed on the basis of action-completed information to be given by the output semantics converter 57 after the action is completed.
  • On the other hand, of recognition results to be given by the input semantics converter 56, the learning module 72 inputs the recognition result of an instruction, received as approach from the user, such as “was patted” and “was petted.”
  • On the basis of this recognition result and a notice from the action switching module 71, the learning module 72 changes the corresponding transition probability in the corresponding behavioral model 70 1 to 70 n of the behavioral model library 70 in such a manner as to lower the revelation probability of the action when it “was patted (scolded)” and to raise the revelation probability of the action when it “was petted (praised).”
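  • The learning described above amounts to shifting probability mass toward or away from one action and renormalizing the row of the state transition table; the following sketch shows the idea with assumed function and argument names.

    def reinforce(transition_probs, action_index, delta):
        """Raise (delta > 0, "was petted") or lower (delta < 0, "was patted") the
        revelation probability of one action and renormalize so that the row of
        transition probabilities still sums to 1 (i.e. 100[%])."""
        transition_probs = list(transition_probs)
        transition_probs[action_index] = max(0.0, transition_probs[action_index] + delta)
        total = sum(transition_probs)
        if total == 0.0:
            return transition_probs                     # degenerate case, leave as is
        return [p / total for p in transition_probs]

    # Example: reinforce([0.3, 0.7], 0, -0.1) lowers the probability of action 0
    # and redistributes the remainder to the other transition.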
  • On the other hand, concerning six emotions in total: “JOY,” “SADNESS,” “ANGER,” “SURPRISE,” “DISGUST” and “FEAR,” the emotion model 73 holds parameters for expressing intensity of the emotion for each emotion. The emotion model 73 is adapted to successively update the parameter value for each of these emotions on the basis of specific recognition results of “has been patted,” “has been stroked” or the like to be given by the input semantics converter module 56 respectively, elapsed time, a notice from the action switching module 71, or the like.
  • Concretely, on the basis of the recognition result from the input semantics converter 56, a degree (predetermined) of work of the action of the pet robot 1 at that time on its emotion, the parameter value of each desire which the instinct model 74 holds, degrees of restraint and stimulus affected by other emotions, elapsed time or the like, the emotion model 73 uses the following equation in a predetermined period:
    E[t+1]=E[t]+ke×ΔE[t]  (1)
    to calculate a parameter value E[t+1] of the emotion in the next period, assuming the amount of fluctuation of the emotion calculated by a predetermined operation expression to be ΔE[t], the current parameter value of the emotion to be E[t], and a coefficient expressing the rate (hereinafter, referred to as “sensitivity”) at which the emotion changes in accordance with the recognition result or the like to be ke.
  • The emotion model 73 replaces this arithmetic result with the current parameter value E[t] of its emotion to thereby update the parameter value of the emotion. In this respect, it has been determined in advance the parameter value of which emotion should be updated in response to each recognition result and notices from the action switching module 71, and when a recognition result of, for example, “has been patted” is given, the parameter value for the emotion of “ANGER” rises, while when a recognition result of “has been stroked” is given, the parameter value for the emotion of “JOY” rises.
  • In contrast, the instinct model 74 holds, concerning four desires of “EXERCISE,” “AFFECTION,” “APPETITE” and “CURIOSITY,” which are independent of one another, a parameter for representing intensity of the desire for each of these desires. The instinct model 74 is adapted to successively update these parameter values for the desires on the basis of recognition results to be given from the input semantics converter module 56, elapsed time, notices from the action switching module 71 or the like respectively.
  • Concretely, concerning the “EXERCISE,” “AFFECTION,” and “CURIOSITY,” assuming an amount of fluctuation of the desire to be calculated by a predetermined operation expression on the basis of the action output, elapsed time, recognition result or the like of the pet robot 1 to be ΔI[k], the current parameter value of the desire to be I[k], and a coefficient for representing the sensitivity of the desire to be ki, the instinct model 74 calculates the parameter value I[k+1] for the desire in the next period through the use of the following equation in a predetermined period:
    I[k+1]=I[k]+ki×ΔI[k]  (2)
    and replaces this arithmetic result with the current parameter value I[k] of the desire to thereby update the parameter value of the desire. In this respect, it has been determined in advance the parameter value of which desire should be changed in response to the action output, recognition result or the like, and if there has been a notice (notice to the effect that the action has been taken) from, for example, the action switching module 71, the parameter value for the “EXERCISE” will become lower.
  • As regards the “APPETITE,” on the basis of the battery residual amount data to be given through the input semantics converter module 56, assuming the battery residual amount to be BL, the instinct model 74 calculates the parameter value I[k] for the “APPETITE” through the use of the following equation:
    I[k]=100−BL   (3)
    in a predetermined period, and replaces this arithmetic result with the current parameter value I[k] for the appetite to thereby update the parameter value for the “APPETITE” concerned.
  • In this respect, in this embodiment, the parameter values for each emotion and each desire are regulated so as to fluctuate within a range of 0 to 100 respectively, and values for the coefficients ke and ki are also individually set for each emotion and for each desire.
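  • Equations (1) to (3) and the regulation of the parameter values to the range of 0 to 100 can be summarized in the following sketch; the function names and the numbers in the example are illustrative only.

    def update_emotion(E_t, delta_E, k_e):
        """Equation (1): E[t+1] = E[t] + ke x dE[t], kept within the 0 to 100 range."""
        return min(100.0, max(0.0, E_t + k_e * delta_E))

    def update_desire(I_k, delta_I, k_i):
        """Equation (2): I[k+1] = I[k] + ki x dI[k], kept within the 0 to 100 range."""
        return min(100.0, max(0.0, I_k + k_i * delta_I))

    def appetite_from_battery(battery_residual_BL):
        """Equation (3): I[k] = 100 - BL for the "APPETITE" desire."""
        return 100.0 - battery_residual_BL

    # Example: with sensitivity k_e = 0.5 and a fluctuation of +20 caused by a
    # recognition result, update_emotion(40.0, 20.0, 0.5) returns 50.0.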
  • On the other hand, the output semantics converter module 57 of the middleware layer 40 gives, as shown in FIG. 4, such an abstract action command as “ADVANCE,” “JOY,” “YELP” or “TRACKING (Chase a ball)” to be given by the action switching module 71 of the application layer 41 as described above to the corresponding signal processing modules 58 to 64 of the output system 65.
  • When an action command is given, these signal processing modules 58 to 64 generate, on the basis of the action command, a servo command value to be given to a corresponding actuator 25 1 to 25 n (FIG. 2) in order to take the action, audio data of sound to be outputted from the speaker 24 (FIG. 2) and/or driving data to be given to the LEDs serving as “Eyes,” and successively transmit these data to the corresponding actuator 25 1 to 25 n, speaker 24 or LEDs through the virtual robot 33 of the robotic server object 32 and the signal processing circuit 14 (FIG. 2).
  • As described above, this pet robot 1 is adapted to be able to autonomously act in response to conditions of the self and surroundings and any instruction and approach from the user in accordance with the control program.
  • (1-3) Growth Model of Pet Robot 1
  • Next, the description will be made of a growth function installed in this pet robot 1. This pet robot 1 has the growth function which continuously changes the action as if it “grew” in response to approach or the like from the user.
  • More specifically, this pet robot 1 is provided with five “growth stages” of “tweety,” “baby period,” “child period,” “young period” and “adult period” as growth processes. Concerning all conditional items relating to “growth” (hereinafter, referred to as “growth-related conditional items”), such as “Operation” and “Action,” out of the conditional items such as the above described “when the battery residual amount is getting low,” the behavioral model library 70 (FIG. 5) in the application layer 41 is provided, as a behavioral model 70 k as shown in FIG. 9, with behavioral models 70 k(1) to 70 k(5) which are brought into correspondence with the “tweety,” “baby period,” “child period,” “young period” and “adult period” respectively. The behavioral model library 70 is adapted to determine the next action through the use of the behavioral model 70 k(1) of the “tweety period” in the initial stage concerning these growth-related conditional items.
  • In this case, each behavioral model 70 k(1) of the “tweety period” has a small number of nodes NODE1 to NODEn (FIG. 7), and the contents of actions to be outputted from these behavioral models 70 k(1) are also actions or operations corresponding to the “tweety period” like “walking in pattern 1 (walking pattern for “tweety period”)” or “making sounds in pattern 1 (bowwow pattern for “tweety period”).”
  • Thus, in the initial stage, this pet robot 1 acts, in accordance with each behavioral model 70 k(1) of the “tweety” period, so that its “operation” becomes “simple” movement such as merely “walking,” “standing” and “lying down,” and so that its “action” becomes “monotonous” by repeatedly performing the same action.
  • At this time, the learning module 72 (FIG. 5) in the application layer 41 holds parameters (hereinafter, referred to as “growth parameters”) representing degrees of “growth” therein, and is adapted to successively update the value of growth parameter in accordance with the number of times, elapsed time or the like of approaches (instructions) such as “was petted” or “was patted” from the user on the basis of the recognition result, elapsed time information or the like to be given from the input semantics converter module 56.
  • The learning module 72 evaluates the value of this growth parameter every time the power to the pet robot 1 is turned on, and when the value exceeds a threshold which has been set in advance by bringing it into correspondence with the “baby period,” notifies the behavioral model library 70 of this. When this notice is given, the behavioral model library 70 changes, concerning each growth-related conditional item described above, the behavioral models to be used respectively to the behavioral model 70 k(2) of the “BABY.”
  • At this time, each behavioral model 70 k(2) in the “baby period” has a greater number of nodes NODE0 to NODEn than the behavioral model 70 k(1) in the “tweety,” and the contents of actions to be outputted from these behavioral models 70 k(2) are also higher in degree of difficulty and more complicated (in a growth level) than the actions in the “tweety.”
  • Thus, this pet robot 1 acts, in accordance with these behavioral models 70 k(2) thereafter, so as to become “slightly higher and more complicated” movement by increasing the number of actions, for example, “operation,” and so as to become action “at least with an objective” concerning the “action.”
  • In the same manner as in the above described case thereafter, every time the value of the growth parameter exceeds each threshold set in advance by bringing it into correspondence with the “child period,” “young period” and “adult period” respectively, the learning module 72 further notifies the behavioral model library 70 of this. Every time this notice is given, the behavioral model library 70 successively changes, concerning each growth-related conditional item described above, the behavioral models to be used respectively to the behavioral models 70 k(3) to 70 k(5) of the “child period,” “young period” and “adult period.”
  • At this time, each behavioral model 70 k(3) to 70 k(5) in the “child period,” “young period” and “adult period” has a greater number of nodes NODE0 to NODEn as the “growth stage” rises respectively, and the contents of actions to be outputted from these behavioral models 70 k(3) to 70 k(5) also become higher in degree of difficulty and level of complicatedness of the action as the “growth stage” rises.
  • As a result, this pet robot 1 successively stepwise changes the “operation” from “simple” toward “higher level and more complicated,” and the “action” from “monotonous” toward “action with intention” as the “growth stage” rises (more specifically, changes from “tweety period” to “baby period,” from “baby period” to “child period,” from “child period” to “young period” and from “young period” to “adult period”).
  • As described above, this pet robot 1 is adapted to cause its action and operation to “grow” in five stages: “tweety,” “baby period,” “child period,” “young period” and “adult period” in accordance with instructions to be given by the user and elapsed time.
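  • The stage switching driven by the growth parameter can be sketched as a simple threshold check; the numeric thresholds below are placeholders, since the text only states that a threshold is set in advance for each “growth stage”.

    # Hypothetical thresholds for the five "growth stages"; only the stage names
    # come from the text.
    GROWTH_STAGES = [("tweety", 0.0), ("baby period", 10.0), ("child period", 30.0),
                     ("young period", 60.0), ("adult period", 100.0)]

    def growth_stage(growth_parameter):
        """Return the highest "growth stage" whose threshold the growth parameter
        has reached; evaluated, for example, every time the power is turned on."""
        stage = GROWTH_STAGES[0][0]
        for name, threshold in GROWTH_STAGES:
            if growth_parameter >= threshold:
                stage = name
        return stage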
  • In this respect, in case of this embodiment, the growth model of the pet robot 1 is a model which branches off in and after the “child period” as shown in FIG. 10.
  • More specifically, in case of the pet robot 1, the behavioral model library 70 in the application layer 41 (FIG. 5) is provided with a plurality of behavioral models respectively as behavioral models 70 k(3) to 70 k(5) for the “child period,” “young period” and “adult period” concerning each growth-related conditional item described above.
  • Actually, as the behavioral model 70 k(3) for, for example, the “child period” in each growth-related conditional item, there are prepared a behavioral model (CHILD 1) for causing the pet robot to perform action of a “fidgety” character having careless and quick movement, and a behavioral model (CHILD 2) for causing it to perform action of a “calm” character having smoother and slower movement than the CHILD 1.
  • Also, as the behavioral model 70 k(4) for the “young period,” there are prepared a behavioral model (YOUNG 1) for causing it to perform action of a “rough” character having more careless and quicker movement than the “fidgety” character of the “child period,” a behavioral model (YOUNG 2) for causing it to perform action and operation of an “ordinary” character having slower and smoother movement than the YOUNG 1, and a behavioral model (YOUNG 3) for causing it to perform action of a “calm” character having slower operation and a smaller amount of action than the YOUNG 2.
  • Further, as the behavioral model 70 k(5) for the “adult period,” there are prepared a behavioral model (ADULT 1) for causing it to perform action of a very excitable “aggressive” character having more careless and quicker movement than the “rough” character of the “young period,” a behavioral model (ADULT 2) for causing it to perform action of an excitable “fidgety” character having smoother and slower movement than the ADULT 1, a behavioral model (ADULT 3) for causing it to perform action of a “gentle” character having smoother and slower movement and a smaller amount of action than the ADULT 2, and a behavioral model (ADULT 4) for causing it to perform action of a “quiet” character having still slower movement and a smaller amount of action than the ADULT 3.
  • When notifying the behavioral model library 70 that the “growth stage” should be raised as described above, the learning module 72 (FIG. 5) in the application layer 41 designates, on the basis of the number of times of “was petted,” “was stroked” and the like in the present “growth stage,” the behavioral model (CHILD 1, CHILD 2, YOUNG 1 to YOUNG 3 or ADULT 1 to ADULT 4) of which “character” should be used as the behavioral model in the next “growth stage” for each growth-related conditional item, in and after the “child period.”
  • As a result, the behavioral model library 70 changes, on the basis of this designation, the behavioral models 70 k(3) to 70 k(5) to be used in and after the “child period” concerning each growth-related conditional item to behavioral models (CHILD 1, CHILD 2, YOUNG 1 to YOUNG 3 and ADULT 1 to ADULT 4) of the designated “character” respectively.
  • In this case, in and after the “child period,” when shifting to the next “growth stage,” the “character” in the next “growth stage” is determined by the “character” in the present “growth stage,” and a shift can be made only between “characters” which are connected by arrows in FIG. 10. Therefore, when the behavioral model (CHILD 1) of the “fidgety” character is used in, for example, the “child period,” the pet robot 1 cannot shift to the behavioral model (YOUNG 3) of the “calm” character in the “young period.”
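  • As an illustration of the constraint just described, the sketch below restricts “character” shifts to pairs connected by an arrow. The adjacency table is hypothetical except for the stated example that CHILD 1 cannot shift to YOUNG 3; the actual arrows are those defined in FIG. 10.
    ALLOWED_SHIFTS = {
        "CHILD 1": {"YOUNG 1", "YOUNG 2"},   # assumed arrows; FIG. 10 is authoritative
        "CHILD 2": {"YOUNG 2", "YOUNG 3"},   # assumed arrows
    }

    def next_character(current, candidate):
        # a shift is allowed only between characters connected by an arrow
        if candidate in ALLOWED_SHIFTS.get(current, set()):
            return candidate
        raise ValueError("%s cannot shift to %s" % (current, candidate))

    # next_character("CHILD 1", "YOUNG 3") raises, matching the example above.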
  • As described above, this pet robot 1 is adapted to change its “character” along with its “growth” in response to approaches or the like from the user, as if a genuine animal formed its character in accordance with how its owner raises it.
  • (1-4) Concrete Configuration of Behavioral Model 70 k
  • Next, the concrete configuration of the behavioral model 70 k (FIG. 9) of each growth-related conditional item described above will be described.
  • In the case of this pet robot 1, the behavioral model 70 k of each growth-related conditional item has a vast state space in which all the action patterns which the pet robot 1 is capable of revealing are stored.
  • Of the state space of these behavioral models 70 k, a state space portion in which basic actions of this pet robot such as “walking,” “lying down” and “standing” are generated serves as a core. In the “tweety period,” these behavioral models 70 k use only a limited portion including this core; thereafter, every time the robot “grows,” they permit a transition to a state space portion to be newly added (a state space portion in which actions and series of action patterns to be newly revealed are generated), and sever a state space portion which is no longer used (a state space portion in which actions and series of action patterns which are no longer to be revealed would be generated), to thereby generate the behavioral models 70 k(1) to 70 k(5) of each “growth stage.”
  • As a method of permitting the transition to a state space portion to be newly added and of severing unnecessary state space as described above, this pet robot 1 uses a method of changing the transition probabilities into that state space in response to the “growth.”
  • For example, in FIG. 11, assume that the transition condition from a node NODEA to a node NODEB is “detecting a ball,” and that a series of node groups 90 starting from the node NODEB is used to perform a series of action patterns to “approach the ball to kick it.” When the ball has been found at the node NODEA, the action pattern PA1 to “chase the ball to kick it” can be revealed with the transition probability P1, but if the transition probability P1 is “0,” this action pattern PA1 will never be revealed.
  • In this pet robot 1, in the case where such an action pattern PA1 is to be revealed only after a certain “growth stage” is reached, this transition probability P1 is set to “0” in the initial stage, and is changed to a predetermined numerical value higher than “0” when the “growth stage” concerned is reached. Conversely, in the case where this action pattern PA1 is to be forgotten when a certain “growth stage” is reached, the transition probability from the node NODEA to the node NODEB is changed to “0” when that “growth stage” is reached.
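  • A minimal sketch of this gating mechanism follows; the node names, the probability value of 0.3 and the table layout are assumptions for illustration only.
    import random

    # P1 gates the whole series of action patterns PA1 reached through NODE_B
    transition_probability = {("NODE_A", "NODE_B"): 0.0}   # hidden at first

    def on_growth(acquire):
        # acquiring the pattern raises P1 above 0; "forgetting" it sets P1 back to 0
        transition_probability[("NODE_A", "NODE_B")] = 0.3 if acquire else 0.0   # 0.3 is an assumed value

    def ball_detected_at_node_a():
        # PA1 can only be revealed while P1 is greater than 0
        return random.random() < transition_probability[("NODE_A", "NODE_B")]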
  • In this pet robot, as the concrete technique for updating the transition probabilities at the necessary points as described above, each behavioral model of the above described growth-related conditional items is provided with files (hereinafter referred to as difference files) 91A to 91D as shown in FIG. 12, in correspondence with the “growth stages” of the “baby period,” “child period,” “young period” and “adult period” respectively.
  • Each of these difference files 91A to 91D contains the node name (number) of a node (corresponding to the node NODEA in FIG. 11) at which a transition probability should be changed as described above, the place in the state transition table 80 (FIG. 8) of that node at which the transition probability should be changed, and the transition probability at that place after the change, in order to reveal the new action as described above on rising to that “growth stage.”
  • The behavioral model 70 k of each growth-related conditional item generates action in accordance with the behavioral model 70 k(1) for the “tweety period” in the initial stage. Thereafter, when a notice to the effect that the robot has “grown” is given from the learning module 72 (FIG. 5) as described above, the transition probability at each of the designated places is changed, on the basis of the difference file 91A to 91D for the corresponding “growth stage,” to the numerical value designated for each node described in that difference file.
  • For example, in the case of the example shown in FIGS. 8 and 12, when the pet robot has grown to the “baby period,” in the behavioral model 70 k for each growth-related conditional item, the transition probability in the “first column” and the “first row” of the area in the state transition table 80 in which the transition probabilities for the node NODE100 are described (in FIG. 8, the portion below the “Output Action” row and to the right of the “Data Range” column) will be changed to “20”[%], the transition probability in the “first column” and the “n-th row” of that state transition table will be changed to “30”[%], and so forth. In addition, in the behavioral model 70 k for each growth-related conditional item, the corresponding transition probabilities for the other nodes NODE320, NODE720, . . . described in the difference file 91A for the “baby period” will be changed similarly.
  • In this case, among the transition probabilities whose numerical values are changed in this manner, there are transition probabilities which were “0” until then (that is, a transition to a node which serves as the starting point of a series of action patterns has been prohibited) and transition probabilities which become “0” after the change (that is, a transition to a node which serves as the starting point of a series of action patterns becomes prohibited). By changing a transition probability from “0” to a predetermined numerical value, or by changing it to “0,” the corresponding series of action patterns comes to be revealed in the new “growth stage,” or is no longer revealed.
  • In this respect, the values of the transition probabilities in each difference file 91A to 91D are selected in such a manner that, even after the necessary transition probabilities have been changed as described above, the sum of the transition probabilities included in the corresponding column of the state transition table 80 amounts to 100[%].
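  • The sketch below illustrates, under assumed node names and probability values, how a difference file of this kind might be applied to a state transition table while keeping each patched group of probabilities at a total of 100[%].
    # one node's probability rows, in %, each summing to 100 (values are illustrative only)
    state_transition_table = {
        "NODE100": [
            [0.0, 40.0, 60.0],    # transition to the start of the new pattern prohibited
            [50.0, 30.0, 20.0],
        ],
    }

    # (node, row, column, new probability in %); assumed format and values
    difference_file_baby = [
        ("NODE100", 0, 0, 20.0),  # permit the newly acquired series of action patterns
        ("NODE100", 0, 2, 40.0),  # and lower another entry so the group still sums to 100
    ]

    def apply_difference_file(table, entries):
        for node, row, col, prob in entries:
            table[node][row][col] = prob
        for node, rows in table.items():
            for r in rows:
                assert abs(sum(r) - 100.0) < 1e-6, "probabilities must still sum to 100 %"

    apply_difference_file(state_transition_table, difference_file_baby)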
  • (1-5) Operation and Effect of this Embodiment
  • In the above described configuration, of the vast state space in which all the action patterns are stored, a state space portion for performing basic actions serves as a core. This pet robot 1 uses only a limited portion including this core in the “tweety period,” and thereafter, every time it “grows,” severs the state space portion which is no longer used except for the core, and permits a transition to a state space portion which should be newly added, to thereby generate the behavioral models 70 k(1) to 70 k(n) of each “growth stage,” and acts in accordance with the behavioral models 70 k(1) to 70 k(n) thus generated.
  • Therefore, because the state space of the behavioral models 70 k(1) to 70 k(n) changes continuously across the “growth stages,” this pet robot 1 is capable of reducing discontinuity in the output action before and after the “growth” and of thereby expressing the “growth” more naturally. Moreover, since the state space portion for generating basic actions is shared in all “growth stages,” the learning result of the basic actions can be successively carried forward to the next “growth stage.”
  • Further, in this pet robot 1, since the state space portion for generating basic actions is shared in all “growth stages” as described above, it is easy to prepare the behavioral models 70 k(1) to 70 k(n) for each “growth stage,” and it is also possible to reduce the amount of data for the entire behavioral models as compared with the conventional case where individual behavioral models are prepared for each “growth stage.”
  • Further, this pet robot 1 severs the state space unnecessary for a series of action patterns in accordance with the “growth” as described above, and permits a transition to the state space necessary for a series of action patterns, to thereby generate the behavioral models 70 k(1) to 70 k(n) of each “growth stage.” Therefore, it is possible to divide each series of action patterns into parts, and to facilitate, by that much, the operation of generating the behavioral model 70 k of each growth-related conditional item.
  • According to the above described configuration, of the vast state space in which all the action patterns are stored, a state space portion for performing basic actions serves as a core, and this pet robot 1 uses only a limited portion including this core in the “tweety period.” Thereafter, every time it “grows,” it severs the state space portion which is no longer used except for the core, and permits a transition to a state space portion which should be newly added, to thereby generate the behavioral models 70 k(1) to 70 k(n) of each “growth stage.” Since the state space of these behavioral models 70 k(1) to 70 k(n) thereby changes continuously, it is possible to reduce discontinuity in the output action before and after the “growth.” Thus, the “growth” can be expressed more naturally, and a pet robot capable of improving the entertainment characteristics can be realized.
  • (1-6) Other Embodiments
  • In this respect, in the above described first embodiment, the description has been made of the case where the present invention is applied to the four-legged walking type pet robots 1 and 100; however, the present invention is not limited thereto and is widely applicable to robots of various other configurations.
  • Also, in the above described first embodiment, the description has been made of the case where the state space of the behavioral models 70 k(1) to 70 k(5) of each “growth stage” is successively expanded along with the “growth”; however, the present invention is not limited thereto, and the state space of the behavioral models 70 k(1) to 70 k(5) may be successively reduced in each “growth stage,” or may be reduced in any of the “growth stages” in the course of the expansion.
  • Further, in the above described first embodiment, the description has been made of the case where the pet robot 1 or 100 “grows” in five stages; however, the present invention is not limited thereto, and the pet robot may be caused to “grow” in a number of stages other than five.
  • Further, in the above described first embodiment, the description has been made of the case where the memory means for storing a behavioral model (a behavioral model including all the action patterns which the pet robot 1 is capable of performing) and the action generating means for generating action by using part or all of the state space of that behavioral model are configured by one behavioral model 70 k and the CPU 10; however, the present invention is not limited thereto and is widely applicable to various other configurations.
  • (2) Second Embodiment
  • (2-1) Principle
  • When a transition to a series of action patterns PA1 to be newly acquired through the “growth” is made only from a specific state (node NODEA) as shown in FIG. 11, the revelation of the action pattern can be controlled simply by changing the transition probability P1. When, however, this transition can occur from a plurality of states (nodes NODEA1 to NODEA3) as shown in FIG. 13, it is not easy to control all of the corresponding transition probabilities P10 to P12.
  • In such a case, an imaginary node (hereinafter referred to as a virtual node) NODEK can be provided in the behavioral model as shown in FIG. 14, and the transitions from the nodes NODEA1 to NODEA3 to the node NODEB, which serves as the starting point of the series of action patterns PA1, can be replaced with transitions to the virtual node NODEK, the virtual node NODEK being brought into correspondence with the node NODEB at the starting point of the above described series of action patterns PA1.
  • Thus, it becomes easy to control the transition probabilities, and even when this series of action patterns PA1 is replaced with another series of action patterns PA2 along with the “growth,” this can be done easily, simply by changing the correspondence of the virtual node NODEK from the node NODEB at the starting point of the previous action pattern PA1 to a node NODEC at the starting point of the next action pattern PA2.
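  • The following sketch illustrates the virtual-node idea with assumed node names: a single binding redirects transitions addressed to the virtual node, and “growth” only rebinds it.
    # the virtual node is bound to the real start node of the current pattern
    virtual_node_binding = {"NODE_K": "NODE_B"}   # NODE_B starts pattern PA1

    def resolve(node):
        # transitions addressed to the virtual node are redirected to its binding
        return virtual_node_binding.get(node, node)

    def replace_pattern_on_growth():
        # swapping PA1 for PA2 only requires rebinding the virtual node to NODE_C;
        # the transitions from NODE_A1..NODE_A3 themselves are left untouched
        virtual_node_binding["NODE_K"] = "NODE_C"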
  • (2-2) Configuration of Pet Robot 100 According to this Embodiment
  • Reference numeral 100 in FIG. 1 denotes a pet robot according to the second embodiment as a whole, which is configured in the same manner as the pet robot 1 according to the first embodiment except for the configuration of the behavioral model 70 k (FIG. 9) of each growth-related conditional item and the manner in which it changes along with the “growth.”
  • More specifically, in this pet robot 100, the behavioral model 70 k of each growth-related conditional item is, as shown in FIG. 15, provided with a behavioral model (hereinafter referred to as the basic behavioral model) 101 for generating basic actions common to all “growth stages,” such as “standing,” “sitting down” and “walking,” and several virtual nodes NODEK1 to NODEKn are provided within this basic behavioral model 101.
  • The behavioral model 70 k for each growth-related conditional item is also provided with action pattern files 102A to 102E in correspondence with the respective “growth stages.” As shown in FIG. 16, each action pattern file 102A to 102E collects, into one file, the state transition tables of the node groups which respectively generate the series of action patterns PA1 to PAn brought into correspondence with the virtual nodes NODEK1 to NODEKn within the basic behavioral model 101 in that “growth stage.”
  • Further, the behavioral model 70 k for each growth-related conditional item is, as shown in FIG. 17, provided with a file (hereinafter referred to as the difference file 103) which summarizes correspondence tables 103A to 103E representing the correspondence between each virtual node NODEK1 to NODEKn in each “growth stage” and the corresponding real node (the node at the starting point of one of the action patterns PA1 to PAn stored in the action pattern file 102A to 102E of that “growth stage,” and so forth). In the initial stage, the behavioral model 70 k for each growth-related conditional item reads out the data of the action pattern file 102A for the “tweety period” and adds it to the basic behavioral model 101, and, on the basis of the correspondence table 103A for the “tweety period” stored in the difference file 103, converts each virtual node NODEK1 to NODEKn within the basic behavioral model 101 into a real node, to thereby generate the behavioral model 70 k(1) for the “tweety period” and generate action on the basis of this behavioral model 70 k(1).
  • Thereafter, when a notice to the effect that the robot has “grown” is given from the learning module 72 (FIG. 5), the behavioral model 70 k for each growth-related conditional item adds the data of the action pattern file 102B for the “baby period” to the basic behavioral model 101 in place of the data of the action pattern file 102A for the “tweety period,” and converts each virtual node NODEK1 to NODEKn within the basic behavioral model 101 into a real node on the basis of the correspondence table 103B for the “baby period” stored in the difference file 103, to thereby generate the behavioral model 70 k(2) for the “baby period” and generate action on the basis of this behavioral model 70 k(2).
  • Similarly thereafter, every time a notice to the effect that the robot has “grown” is given from the learning module 72, the behavioral model 70 k for each growth-related conditional item successively changes the data of the action pattern files 102A to 102E to be added to the basic behavioral model 101 to the data for the “child period,” the “young period” and the “adult period,” and converts each virtual node NODEK1 to NODEKn within the basic behavioral model 101 into a real node on the basis of the correspondence table 103C to 103E for that “growth stage” stored in the difference file 103, to thereby successively generate the behavioral models 70 k(3) to 70 k(5) for the “child period,” “young period” and “adult period” and generate action on the basis of these behavioral models 70 k(3) to 70 k(5).
  • As described above, this pet robot 100 is adapted to change its action in response to the “growth” by successively changing, along with the “growth,” the action patterns PA1 to PAn brought into correspondence with the virtual nodes NODEK1 to NODEKn within the basic behavioral model 101.
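  • A minimal sketch of this per-stage assembly follows; the file contents, stage names and bindings shown are placeholders, not the actual action pattern files 102A to 102E or correspondence tables 103A to 103E.
    # shared core plus per-stage data; contents here are placeholders only
    basic_model_nodes = ["STAND", "SIT", "WALK", "NODE_K1", "NODE_K2"]

    action_pattern_files = {
        "tweety": {"PA1_START": "state transition tables of pattern PA1"},
        "baby":   {"PA1_START": "tables of PA1", "PA2_START": "tables of PA2"},
    }
    correspondence_tables = {
        "tweety": {"NODE_K1": "PA1_START", "NODE_K2": "PA1_START"},
        "baby":   {"NODE_K1": "PA1_START", "NODE_K2": "PA2_START"},
    }

    def build_behavioral_model(stage):
        # add the stage's action patterns to the shared core and bind each
        # virtual node to a real start node through the correspondence table
        nodes = dict.fromkeys(basic_model_nodes)
        nodes.update(action_pattern_files[stage])
        return nodes, correspondence_tables[stage]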
  • (2-3) Operation and Effect of this Embodiment
  • In the above described configuration, this pet robot 100 changes its action in response to the “growth” by successively changing the action patterns PA1 to PAn to be brought into correspondence with each virtual node NODEK1 to NODEKn within the basic behavioral model 101 respectively along with the “growth.”
  • In this pet robot 100, therefore, since the basic behavioral model 101 for generating the basic actions is shared in all the “growth stages,” it is possible to provide consistency of action throughout all the “growth stages,” and the learning result of the basic actions can be successively carried forward to the next “growth stage.”
  • In this pet robot 100, since the basic behavioral model 101 is shared in all the “growth stages” as described above, it is easy to prepare the behavioral model, and it is also possible to curtail the amount of data for the entire behavioral models as compared with a case where individual behavioral models are prepared for each “growth stage” as in the conventional case.
  • Further, in this pet robot 100, it is possible to divide into parts each of the action patterns PA1 to PAn brought into correspondence with the virtual nodes NODEK1 to NODEKn within the basic behavioral model 101, and to facilitate, by that much, the operation of generating the behavioral model 70 k of each growth-related conditional item.
  • Further, in this pet robot 100, in addition to operation effects similar to those obtained in the first embodiment, since the virtual nodes NODEK1 to NODEKn are utilized as described above, the behavioral model 70 k for each growth-related conditional item can be generated easily even when shifts to a certain series of action patterns PA1 may occur from various nodes NODEA1 to NODEA3 as shown in, for example, FIG. 13.
  • According to the above described configuration, the basic behavioral model 101 is provided with virtual nodes NODEK1 to NODEKn therein, and the behavioral model 70 k(1) for the “tweety period” is generated by bringing (the node at the starting point of) each series of action patterns PA1 to PAn for the “tweety period” into correspondence with each virtual node NODEK1 to NODEKn. Thereafter, the behavioral models 70 k(2) to 70 k(5) for the “baby period,” the “child period,” the “young period” and the “adult period” are generated by replacing, along with the “growth,” the series of action patterns PA1 to PAn brought into correspondence with the virtual nodes NODEK1 to NODEKn with the action patterns for the “baby period,” the “child period,” the “young period” and the “adult period.” Therefore, it is possible to provide consistency of action throughout all the “growth stages.” Thus, it is possible to express the “growth” more naturally, and to realize a pet robot capable of improving the entertainment characteristics.
  • (2-4) Other Embodiments
  • In this respect, in the above described second embodiment, the description has been made of the case where the present invention is applied to the four-legged walking type pet robots 1 and 100; however, the present invention is not limited thereto and is widely applicable to robots of various other configurations.
  • Also, in the above described second embodiment, the description has been made of the case where the state space of the behavioral models 70 k(1) to 70 k(5) of each “growth stage” is successively expanded along with the “growth”; however, the present invention is not limited thereto, and the state space of the behavioral models 70 k(1) to 70 k(5) may be successively reduced in each “growth stage,” or may be reduced in any of the “growth stages” in the course of the expansion.
  • Further, in the above described second embodiment, the description has been made of the case where the pet robot 1 or 100 “grows” in five stages; however, the present invention is not limited thereto, and the pet robot may be caused to “grow” in a number of stages other than five.
  • Further, in the above described second embodiment, the description has been made of the case where the change means for changing the node group to be allocated to each virtual node NODEK1 to NODEKn is configured by one behavioral model 70 k and the CPU 10; however, the present invention is not limited thereto and is widely applicable to various other configurations.
  • (3) Third Embodiment
  • (3-1) Configuration of Pet Robot According to Third Embodiment
  • In FIG. 18, reference numeral 110 generally denotes a pet robot according to a third embodiment. Leg units 112A to 112D are respectively connected to the front right, front left, rear right and rear left parts of a body unit 111, and a head unit 113 and a tail unit 114 are respectively connected to the front end part and the rear end part of the body unit 111.
  • As shown in FIG. 19, the body unit 111 contains a control unit 126, in which a CPU (Central Processing Unit) 120, a DRAM (Dynamic Random Access Memory) 121, a flash ROM (Read Only Memory) 122, a PC (Personal Computer) card interface circuit 123 and a signal processing circuit 124 are connected to each other through an internal bus 125, and a battery 127 serving as the power source of the pet robot 110. Further, an angular velocity sensor 128 and an acceleration sensor 129 for detecting the orientation and the acceleration of the movement of the pet robot 110 are also accommodated in the body unit 111.
  • Further, disposed at predetermined positions in the head unit 113 are a CCD (Charge Coupled Device) camera 130 for picking up an image of the external environment, a touch sensor 131 for detecting pressure applied by physical actions from the user such as “petting” or “patting,” a distance sensor 132 for measuring the distance to an object in front, a microphone 133 for collecting external sounds, a speaker 134 for outputting sounds such as a bark, and LEDs (Light Emitting Diodes) (not shown) corresponding to the “eyes” of the pet robot 110.
  • Further, actuators 135 1 to 135 n of a number corresponding to the degrees of freedom and potentiometers 136 1 to 136 n are respectively disposed in the joint parts of the leg units 112A to 112D, the connecting parts between the leg units 112A to 112D and the body unit 111, the connecting part between the head unit 113 and the body unit 111, the connecting part of the tail 114A of the tail unit 114, and so on.
  • The various sensors such as the angular velocity sensor 128, the acceleration sensor 129, the touch sensor 131, the distance sensor 132, the microphone 133 and the potentiometers 136 1 to 136 n, together with the speaker 134, the LEDs and the actuators 135 1 to 135 n, are connected to the signal processing circuit 124 of the control unit 126 through corresponding hubs 27 1 to 27 N. The CCD camera 130 and the battery 127 are directly connected to the signal processing circuit 124.
  • At this time, the signal processing circuit 124 sequentially fetches the sensor data, image data and audio data supplied from the above described sensors and sequentially stores them at prescribed positions in the DRAM 121 through the internal bus 125. Further, the signal processing circuit 124 also sequentially fetches the battery residual amount data indicating the battery residual amount supplied from the battery 127 and stores it at a prescribed position in the DRAM 121.
  • The sensor data, the image data, the audio data and the battery residual amount data stored in the DRAM 121 in this way are used when the CPU 120 subsequently controls the operation of the pet robot 110.
  • In practice, when the power source of the pet robot 110 is first turned on, the CPU 120 reads out a control program stored in a memory card 138 inserted in a PC card slot (not shown) of the body unit 111, or stored in the flash ROM 122, directly or through the PC card interface circuit 123, and stores it in the DRAM 121.
  • Further, the CPU 120 then judges its own state, the surrounding state, and whether or not there is an instruction or an action from the user, on the basis of the sensor data, the image data, the audio data and the battery residual amount data which are sequentially stored in the DRAM 121 from the signal processing circuit 124 as mentioned above.
  • Still further, the CPU 120 determines a subsequent action on the basis of this judgment result and the control program stored in the DRAM 121, and drives the required actuators 135 1 to 135 n on the basis of the determination result, thereby causing the pet robot to act, for example to move the head unit 113 up and down and left and right, to wag the tail 114A of the tail unit 114 and to drive the respective leg units 112A to 112D to walk.
  • At this time, the CPU 120 also generates audio data as required and supplies it to the speaker 134 as an audio signal through the signal processing circuit 124 to output a sound based on the audio signal to the outside, and turns on, turns off or flickers the above described LEDs.
  • As described above, the pet robot 110 is designed to act autonomously in accordance with its own state, the surrounding state, and the instructions and actions from the user.
  • (3-2) Software Configuration of Control Program
  • The software configuration of the above described control program in the pet robot 110 is shown in FIG. 20. In FIG. 20, a device driver layer 140 is located in the lowermost layer of the control program and is composed of a device driver set 141 having a plurality of device drivers. In this case, each device driver is an object which is allowed to directly access hardware used in an ordinary computer, such as the CCD camera 130 (FIG. 19) or a timer, and executes processing in response to an interrupt from the corresponding hardware.
  • Further, a robotic server object 142 is located in a layer higher than the device driver layer 140. The robotic server object 142 comprises a virtual robot 143 composed of software groups for providing interfaces to have access to, for instance, the hardware such as the above described various types of sensors or the actuators 135 1 to 135 n, etc., a power manager 144 composed of software groups for managing the switching of the power source or the like, a device driver manager 145 composed of software groups for managing various kinds of other device drivers and a designed robot 146 composed of software groups for managing the mechanism of the pet robot 110.
  • A manager object 147 comprises an object manager 148 and a service manager 149. In this case, the object manager 148 comprises software groups for managing the start and end of each of software groups included in the robotic server object 142, a middleware layer 150 and an application layer 151. The service manager 149 comprises software groups for managing the connection of the objects on the basis of connection information between the objects written in a connection file stored in the memory card 138 (FIG. 19).
  • The middleware layer 150 is located in a layer higher than the robotic server object 142 and is composed of software groups for providing the basic functions of the pet robot 110 such as image processing or audio processing. The application layer 151 is located in a layer higher than the middleware layer 150 and is composed of software groups for determining the action of the pet robot 110 on the basis of the result processed by the software groups constituting the middleware layer 150.
  • The specific software configurations of the middleware layer 150 and the application layer 151 are respectively shown in FIGS. 21 and 22.
  • As apparent from FIG. 21, the middleware layer 150 comprises a recognition system 167 including signal processing modules 160 to 165 (for recognizing sound scales, detecting a distance, detecting a posture, the touch sensor, detecting a movement and recognizing a color) and an input semantics converter module 166, and an output system 175 including an output semantics converter module 167 and signal processing modules 168 to 174 (for managing a posture, tracking, reproducing a motion, walking, standing up after falling down, lighting the LEDs, reproducing sounds and the like).
  • In this case, the respective signal processing modules 160 to 165 of the recognition system 167 fetch the corresponding data from among the sensor data, the image data and the audio data read from the DRAM 121 (FIG. 19) by the virtual robot 143 of the robotic server object 142, execute prescribed processing on the basis of the fetched data and supply the processed results to the input semantics converter module 166.
  • The input semantics converter module 166 recognizes its own and circumferential states or the instructions and the actions from the user such as “detected a ball”, “detected a falling-down”, “was touched”, “was patted”, “heard the scales of do, mi, so”, “detected a moving object” or “detected an obstacle”, on the basis of the processed results supplied from the signal processing modules 160 to 165 and outputs the recognition results to the application layer 151 (FIG. 19).
  • As shown in FIG. 22, the application layer 151 comprises five modules including a behavioral model library 180, an action switching module 181, a learning module 182, an emotion model 183 and an instinct model 184.
  • In this case, in the behavioral model library 180, as shown in FIG. 23, respectively independent behavioral models 180 1 to 180 n are provided so as to correspond to previously selected several condition items such as “when a battery residual amount is getting low”, “when standing up after falling-down”, “when avoiding an obstacle”, “when expressing an emotion”, “when detecting a ball” or the like.
  • Then, when the recognition results are respectively supplied from the input semantics converter module 166 or when a prescribed time passes after the last recognition result is supplied or the like, these behavioral models 180 1 to 180 n respectively determine subsequent actions by referring to the parameter values of corresponding emotional behaviors held in the emotion model 183 or the parameter values of corresponding desires held in the instinct model 184 if necessary as mentioned later and output the determination results to the action switching module 181.
  • In this embodiment, the respective behavioral models 180 1 to 180 n use, as a method for determining a subsequent action, an algorithm called a probability automaton which stochastically determines to which of the nodes NODE0′ to NODEn′ a transition is made from one node (state) NODE0′ to NODEn′, on the basis of the transition probabilities P1′ to Pn′ respectively set for the arcs ARC1 to ARCn connecting the nodes NODE0′ to NODEn′ together.
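  • A probability automaton of this kind can be sketched as follows; the node names and probability values are illustrative only.
    import random

    # outgoing arcs of each node with their transition probabilities (assumed values)
    arcs = {
        "NODE0": [("NODE1", 0.3), ("NODE2", 0.5), ("NODE0", 0.2)],
    }

    def step(node):
        # draw the destination node according to the arc probabilities
        destinations, weights = zip(*arcs[node])
        return random.choices(destinations, weights=weights, k=1)[0]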
  • Specifically, each behavioral model 180 1 to 180 n has a state transition table 190 as shown in FIG. 25 for each of these nodes NODE0′ to NODEn′, so as to correspond to each of the nodes NODE0′ to NODEn′ which respectively form their own behavioral models 180 1 to 180 n.
  • In this state transition table 190, input events (recognition results) which are regarded as transition conditions in the nodes NODE0′ to NODEn′ are listed in order of priority in the row of “input event name” and further conditions of the transition conditions are described in corresponding columns in the rows of “data name” and “range of data”.
  • Thus, for the node NODE100′ shown in the state transition table 190 of FIG. 25, in the case where the recognition result “detected a ball (BALL)” is obtained, the condition for shifting to another node is that the “size (SIZE)” of the ball given together with the recognition result is within the range of “0 to 1000,” and in the case where the recognition result “detected an obstacle (OBSTACLE)” is obtained, the condition is that the “distance (DISTANCE)” to the obstacle given together with the recognition result is within the range of “0 to 100.”
  • Further, in the node NODE100′, even in the case where no recognition result is input, the node can shift to another node when the parameter value of any one of “joy (JOY),” “surprise (SURPRISE)” or “sadness (SADNESS),” among the parameter values of the respective emotional behaviors and desires held in the emotion model 183 and the instinct model 184 to which the behavioral models 180 1 to 180 n periodically refer, is within the range of “50 to 100.”
  • Further, in the state transition table 190, the names of the nodes to which a shift can be made from the nodes NODE0′ to NODEn′ are enumerated in the row of “transition destination node” in the column of “transition probability to other nodes.” The transition probability to each of the other nodes NODE0′ to NODEn′ to which a shift can be made when all the conditions described in the rows of “input event name,” “data name” and “range of data” are satisfied is described at the corresponding position in the column of “transition probability to other nodes,” and the action to be outputted upon a shift to that node NODE0′ to NODEn′ is described in the row of “output action” in the same column. In this connection, the sum of the probabilities in each row in the column of “transition probability to other nodes” is 100 [%].
  • Therefore, in the node NODE100′ shown in the state transition table 190 of FIG. 25, for instance, in the case where the recognition result “detected a ball (BALL)” with the “size (SIZE)” of the ball within the range of “0 to 1000” is given, the node can shift to the “node NODE120′” with a probability of “30 [%],” and the action “ACTION 1” is outputted at that time.
  • The behavioral models 180 1 to 180 n are each configured so that the nodes NODE0′ to NODEn′ described in such state transition tables 190 are linked together. When recognition results are supplied from the input semantics converter module 166, the behavioral models 180 1 to 180 n stochastically determine the next action by using the state transition table 190 of the corresponding node NODE0′ to NODEn′ and output the determination result to the action switching module 181.
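  • The sketch below mimics one entry of such a state transition table. The ball condition and the 30[%] shift to NODE120′ with “ACTION 1” follow the example above; the remaining 70[%] entry is an assumption added only so that the probabilities sum to 100[%].
    import random

    node100_table = [
        {"event": "BALL", "data": "SIZE", "range": (0, 1000),
         "rows": [("NODE120", 30, "ACTION 1"), ("NODE100", 70, "no action")]},  # 70 % entry is assumed
    ]

    def decide(event, data_name, value):
        for entry in node100_table:
            low, high = entry["range"]
            if entry["event"] == event and entry["data"] == data_name and low <= value <= high:
                nodes, probs, actions = zip(*entry["rows"])
                i = random.choices(range(len(nodes)), weights=probs, k=1)[0]
                return nodes[i], actions[i]
        return None   # transition conditions not satisfied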
  • The action switching module 181 selects, from among the actions outputted from the behavioral models 180 1 to 180 n of the behavioral model library 180, the action outputted from the behavioral model having the higher predetermined priority, and transmits a command for executing that action (hereinafter called an action command) to the output semantics converter module 167 of the middleware layer 150. In the present embodiment, the behavioral models 180 1 to 180 n described on the lower side in FIG. 23 are set to have higher priority.
  • The action switching module 181 informs the learning module 182, the emotion model 183 and the instinct model 184 of the completion of the action on the basis of action completion information supplied from the output semantics converter 167 after the completion of the action.
  • On the other hand, the learning module 182 receives, from among the recognition results supplied from the input semantics converter module 166, the recognition results of teaching received as actions from the user, such as “hit” or “patted.”
  • Then, the learning module 182 changes the corresponding transition probabilities of the behavioral models 180 1 to 180 n, on the basis of this recognition result and the notification from the action switching module 181, so as to lower the appearance probability of the action when “hit (scolded)” and to raise the appearance probability of the action when “patted (praised).”
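  • A minimal sketch of such a teaching rule follows; the adjustment step and the renormalization are assumptions, since the specification does not state how the remaining probabilities are rebalanced.
    def reinforce(row, index, praised, step=0.05):
        # row: transition probabilities of one table entry, summing to 1.0;
        # raise the probability of the praised action, lower it when scolded,
        # then renormalize so the row still sums to 1.0 (renormalization is assumed)
        row = list(row)
        row[index] = max(0.0, min(1.0, row[index] + (step if praised else -step)))
        total = sum(row)
        return [p / total for p in row]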
  • The emotion model 183 holds, for each of the total of six emotional behaviors of “joy,” “sadness,” “anger,” “surprise,” “disgust” and “fear,” a parameter indicating the intensity of that emotional behavior. The emotion model 183 sequentially updates the parameter values of these emotional behaviors on the basis of the specific recognition results such as “hit” and “patted” supplied from the input semantics converter module 166, the lapse of time, the notification from the action switching module 181 and so on.
  • Specifically, the emotion model 183 calculates, at prescribed intervals, the parameter value E[t+1]′ of an emotional behavior in the next cycle by using the following equation (4), where ΔE[t]′ is the quantity of fluctuation of the emotional behavior calculated by a prescribed operational expression on the basis of the recognition results supplied from the input semantics converter module 166, the degree (preset) to which the action of the pet robot 110 at that time acts on the emotional behavior, the parameter values of the desires held by the instinct model 184, the degree of suppression and stimulation received from the other emotional behaviors, the lapse of time and so on, E[t]′ is the present parameter value of the emotional behavior, and ke′ is a coefficient representing the rate at which the emotional behavior is changed by the recognition results and the like (hereinafter called a sensitivity).
    E[t+1]′ = E[t]′ + ke′ × ΔE[t]′   (4)
  • Then, the emotion model 183 replaces the present parameter value E[t]′ of the emotional behavior with this calculated result to update the parameter value of the emotional behavior. Which emotional behavior has its parameter value updated in response to each recognition result and each notification from the action switching module 181 is determined in advance. For instance, when the recognition result of “hit” is supplied, the parameter value of the emotional behavior of “anger” is raised, and when the recognition result of “patted” is supplied, the parameter value of the emotional behavior of “joy” is raised.
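  • Equation (4), together with the 0-to-100 parameter range described later in this embodiment, can be sketched as follows; the example values in the comment are illustrative.
    def update_emotion(e_t, delta_e, k_e):
        # E[t+1]' = E[t]' + ke' x dE[t]', clamped to the 0-100 parameter range
        return max(0.0, min(100.0, e_t + k_e * delta_e))

    # e.g. being "hit" raises "anger":  anger = update_emotion(anger, 10.0, k_e_anger)  # assumed values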
  • In comparison, the instinct model 184 holds, for each of the four mutually independent desires of an “exercise desire,” an “affection desire,” an “appetite desire” and a “curiosity desire,” a parameter indicating the intensity of that desire. The instinct model 184 sequentially updates the parameter values of these desires on the basis of the recognition results supplied from the input semantics converter module 166, the lapse of time, the notification from the action switching module 181 and so on.
  • Specifically, for the “exercise desire,” the “affection desire” and the “curiosity desire,” the instinct model 184 calculates, at prescribed intervals, the parameter value I[k+1]′ of a desire in the next cycle by using the following equation (5), where ΔI[k]′ is the quantity of fluctuation of the desire calculated by a prescribed operational expression on the basis of the action output of the pet robot 110, the lapse of time, the recognition results and so on, I[k]′ is the present parameter value of the desire, and ki′ is a coefficient indicating the sensitivity of the desire.
    I[k+1]′ = I[k]′ + ki′ × ΔI[k]′   (5)
  • The present parameter value I[k]′ is replaced with this calculated result to update the parameter value of the desire. Which desire has its parameter value changed in response to each action output, each recognition result and so on is determined in advance. For instance, when a notification (information of an action) is sent from the action switching module 181, the parameter value of the “exercise desire” decreases.
  • Further, as for the “appetite desire,” the instinct model 184 calculates, at prescribed intervals, the parameter value I[k]′ of the “appetite desire” in accordance with the following equation (6), where BL′ is the battery residual amount obtained on the basis of the battery residual amount data supplied through the input semantics converter module 166.
    I[k]′ = 100 − BL′   (6)
  • The present parameter value of the “appetite desire” is replaced with this calculated result to update the parameter value of the “appetite desire.”
  • In the present embodiment, the parameter values of the emotional behaviors and the desires are each regulated so as to vary within the range of 0 to 100. Further, the values of the coefficients ke′ and ki′ are set individually for each emotional behavior and for each desire.
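  • Equations (5) and (6), together with the 0-to-100 parameter range just described, can be sketched as follows.
    def update_desire(i_k, delta_i, k_i):
        # I[k+1]' = I[k]' + ki' x dI[k]', clamped to the 0-100 parameter range
        return max(0.0, min(100.0, i_k + k_i * delta_i))

    def appetite_from_battery(battery_residual):
        # equation (6): the lower the battery residual amount, the larger the "appetite"
        return max(0.0, min(100.0, 100.0 - battery_residual))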
  • The output semantics converter module 167 of the middleware layer 150 supplies abstract action commands such as “advancement”, “joy”, “cry” or “tracking (chase a ball)” supplied from the action switching module 181 of the application layer 151 as mentioned above to the corresponding signal processing modules 168 to 174 of the output system 175, as shown in FIG. 21.
  • Then, when the action commands are supplied to the signal processing modules 168 to 174, these signal processing modules 168 to 174 generate servo command values to be supplied to the actuators 135 1 to 135 n (FIG. 19) for carrying out the actions, the audio data of sound outputted from the speaker 134 (FIG. 19) or driving data supplied to the LED of the “eye” on the basis of the action commands and sequentially transmit these data to the corresponding actuators 135 1 to 135 n, the speaker 134 or the LEDs through the virtual robot 143 of the robotic server object 142 and the signal processing circuit 124 (FIG. 19).
  • In such a way, the pet robot 110 is designed to carry out the autonomous actions in accordance with its own and circumferential states and the instructions and the actions from the user on the basis of a control program.
  • (3-3) Growth Model of Pet Robot
  • (3-3-1) Growth of Action
  • Now, the growth function mounted on the pet robot 110 will be described. The pet robot 110 is provided with a growth function for changing its action in accordance with the actions from the user, as if a real animal were growing.
  • Specifically, in the pet robot 110, five “growth stages,” a “tweety period,” a “baby period,” a “child period,” a “young period” and an “adult period,” are provided as the growth process. In the behavioral model library 180 (FIG. 22) of the application layer 151, behavioral models 180 k(1) to 180 k(5) respectively corresponding to the “tweety period,” the “baby period,” the “child period,” the “young period” and the “adult period” are provided as the behavioral models 180 k, as shown in FIG. 26, for all the condition items (hereinafter called condition items related to growth) related to the four items of a “walking state,” a “motion,” an “action” and a “sound (bark)” among the respective condition items such as the above described “when the battery residual amount gets low.” Then, concerning these condition items related to growth, the behavioral model library 180 determines a next action by employing the behavioral models 180 k(1) of the “tweety period” during the initial time.
  • In this case, each behavioral model 180 k(1) of the “tweety period” has a small number of nodes NODE0′ to NODEn′ (FIG. 24). Further, the contents of the actions outputted from these behavioral models 180 k(1) correspond to those of the “tweety period,” such as “move forward in accordance with a pattern 1” (a walking pattern for the “tweety period”) or “cry in accordance with a pattern 1” (a crying pattern for the “tweety period”).
  • Thus, during the initial time the pet robot 110 acts and operates in accordance with each behavioral model 180 k(1) of the “tweety period”: for instance, as for the “walking state,” it “walks totteringly” with small steps; as for the “motion,” it makes “simple” movements such as simply “walking,” “standing” and “sleeping”; as for the “action,” it makes “monotonous” actions by repeating the same actions; and as for the “sound,” it generates a “small and short” cry.
  • Further, at this time, the learning module 182 (FIG. 22) of the application layer 151 holds a parameter (hereinafter called a growth parameter) indicating the degree of “growth” and sequentially updates the value of the growth parameter in accordance with the number of times of actions (teaching) from the user such as “patted” or “hit” or the lapse of time, on the basis of the recognition results, the lapse-of-time information and so on supplied from the input semantics converter module 166.
  • Then, the learning module 182 evaluates the value of the growth parameter every time the power source of the pet robot 110 is turned on. When this value exceeds a threshold value preset so as to correspond to the “baby period,” the learning module 182 informs the behavioral model library 180 accordingly. When given this information, the behavioral model library 180 changes the behavioral models used for the above described condition items related to growth to the behavioral models 180 k(2) of the “baby period.”
  • At this time, each behavioral model 180 k(2) of the “baby period” has a larger number of nodes NODE0′ to NODEn′ than the behavioral model 180 k(1) of the “tweety period.” Further, the contents of the actions outputted from these behavioral models 180 k(2) are more difficult and more complicated in level (growth level) than those of the actions in the “tweety period.”
  • Thus, the pet robot 110 then acts and operates in accordance with these behavioral models 180 k(2): for instance, as for the “walking state,” it “walks a little more steadily” by increasing the rotating speed of the respective actuators 135 1 to 135 n (FIG. 19); as for the “motion,” it makes “a little higher level and more complicated” movements by increasing the number of actions; as for the “action,” it conducts itself “with a little purpose”; and as for the “sound,” it produces “a little longer and louder” sounds.
  • Further, every time the value of the growth parameter exceeds the threshold values respectively preset so as to correspond to the “child period,” the “young period” and the “adult period,” the learning module 182 informs the behavioral model library 180 accordingly in the same way as described above. Every time this information is supplied, the behavioral model library 180 sequentially changes the behavioral models used for the above described condition items related to growth to the behavioral models 180 k(3) to 180 k(5) of the “child period,” the “young period” and the “adult period,” respectively.
  • At this time, as the “growth stage” rises, the number of nodes NODE0′ to NODEn′ of each of the behavioral models 180 k(3) to 180 k(5) of the “child period,” the “young period” and the “adult period” increases. Further, as the “growth stage” rises, the contents of the actions outputted from these behavioral models 180 k(3) to 180 k(5) become more difficult and more complicated in level.
  • As a result, as the “growth stage” of the pet robot 110 rises (in other words, as the “tweety period” changes to the “baby period,” the “baby period” to the “child period,” the “child period” to the “young period” and the “young period” to the “adult period”), the “walking state” changes stepwise from “walking totteringly” to “walking steadily,” the “motion” from a “simple” motion to a “high level and complicated” motion, the “action” from a “monotonous” action to a “purposeful” action and the “sound” from a “small and short” sound to a “long and loud” sound.
  • As described above, in the pet robot 110, the action and operation are designed to “grow” through the five stages of the “tweety period,” the “baby period,” the “child period,” the “young period” and the “adult period” in accordance with the teaching given by the user and the lapse of time.
  • In the present embodiment, the growth model of the pet robot 110 is a model which branches in and after the “child period,” as shown in FIG. 27.
  • Specifically, in case of the pet robot 110, in the behavioral model library 180 of the application layer 151 (FIG. 22), a plurality of behavioral models are prepared as the behavioral models 180 k(3) to 180 k(5) of the “child period”, the “young period” and the “adult period” respectively for the above described condition items related to growth.
  • In practice, as the behavioral models of, for instance, the “child period” of the condition items related to growth, are prepared a behavioral model (CHILD 1′) for performing an action of a “wild” character whose movement is rough and rapid and a behavioral model (CHILD 2′) for performing an action of a “gentle” character whose movement is smoother and slower than the former.
  • Further, as the behavioral models of the “young period,” there are prepared a behavioral model (YOUNG 1′) for performing an action of a “rough” character whose movement is rougher and faster than that of the “wild” character of the “child period,” a behavioral model (YOUNG 2′) for performing an action and operation of an “ordinary” character whose movement is slower and smoother than that of the YOUNG 1′, and a behavioral model (YOUNG 3′) for performing an action of a “gentle” character whose movement is much slower and whose amount of movement is smaller than that of the YOUNG 2′.
  • Still further, as the behavioral models of the “adult period,” there are prepared a behavioral model (ADULT 1′) for performing an action of a very irritable and “aggressive” character whose movement is rougher and faster than that of the “rough” character of the “young period,” a behavioral model (ADULT 2′) for performing an action of an irritable and “wild” character whose movement is smoother and slower than that of the ADULT 1′, a behavioral model (ADULT 3′) for performing an action of a “gentle” character whose movement is smoother and slower and whose amount of movement is smaller than that of the ADULT 2′, and a behavioral model (ADULT 4′) for performing an action of a “quiet” character whose movement is much slower and whose amount of movement is smaller than that of the ADULT 3′.
  • Then, when supplying the information for raising the “growth stage” to the behavioral model library 180 as described above, the learning module 182 (FIG. 22) of the application layer 151 designates which “character” of behavioral model, from among the behavioral models CHILD 1′, CHILD 2′, YOUNG 1′ to YOUNG 3′ and ADULT 1′ to ADULT 4′, should be used as the behavioral model of the next “growth stage” for each condition item related to growth, on the basis of the number of times of “hit,” “patted” and so on in the present “growth stage,” in and after the “child period.”
  • As a result, the behavioral model library 180 changes the behavioral models used after the “child period” to the behavioral models of the designated “characters” respectively for the condition items related to growth on the basis of the designation.
  • In this case, in and after the “child period,” the “character” in the next “growth stage” is determined, upon shifting to the next “growth stage,” by the “character” in the present “growth stage,” and a character can be changed only to one of the “characters” connected together by the arrow marks in FIG. 27. Therefore, for example, in the case where the behavioral model (CHILD 1′) of the “wild” character is employed in the “child period,” it cannot be changed to the behavioral model (YOUNG 3′) of the “gentle” character in the “young period.”
  • As described above, the pet robot 110 is designed to change its “character” with its “growth” in accordance with the actions from the user and the like, as if the character of a real animal were formed by the way its owner raises it.
  • (3-3-2) Growth of Emotion and Instinct
  • In addition to the above configuration, in the case of the pet robot 110, the emotion and instinct are designed to “grow” as the above described actions “grow”.
  • More specifically, in case of the pet robot 110, in the emotion model 183 (FIG. 22) of the application layer 151, are stored files (called them emotion parameter files, hereinafter) 200A to 200E in which the values of coefficient ke′ of the equation (1) relative to each emotional behavior for each of the “growth stages” as shown in FIGS. 29(A) to 29(E) are respectively described.
  • The emotion model 183 is designed to cyclically update the parameter values of the emotional behaviors respectively on the basis of the equation (4) by using the values of coefficient ke′ described in the emotion parameter file 200A for the “tweety period” during the initial time (namely, the “growth stage” is a stage of the “tweety period”.
  • Further, every time the “growth stage” rises, the learning module 182 supplies information for informing the rise of the stage to the emotion model 183 as in the case of the behavioral model library 180 (FIG. 22). Then, the emotion model 183 respectively updates the values of coefficient ke of the equation (4) for the respective emotional behaviors to corresponding values described in the emotion parameter files 200B to 200E of the corresponding “growth stages” every time this information is supplied.
  • At this time, in the “tweety period”, all the values of the coefficient ke′ of the emotional behaviors other than “joy” and “fear” are set to “0”, as is apparent from FIG. 29(A). Accordingly, in the “tweety period”, only the parameter values of “joy” and “fear” change when the emotional behaviors are cyclically updated; the parameter values of the other emotional behaviors remain constant, so that the emotional behaviors except “joy” and “fear” are suppressed. Consequently, in the “tweety period”, only the “joy” and the “fear” of the six emotional behaviors can be expressed as actions (in other words, only the respective parameter values of “joy” and “fear” of the six emotional behaviors are employed for generating actions).
  • Further, as shown in FIG. 29(B), in the “baby period”, all the values of the coefficient ke′ of the emotional behaviors other than “joy”, “fear” and “anger” are set to “0”. Accordingly, in the “baby period”, only the parameter values of “joy”, “fear” and “anger” change when the emotional behaviors are cyclically updated; the parameter values of the other emotional behaviors remain constant, so that the emotional behaviors except “joy”, “fear” and “anger” are suppressed. As a result, in the “baby period”, the “joy”, the “fear” and the “anger” of the six emotional behaviors can be expressed as actions.
  • Similarly, as shown in FIGS. 29(C) to 29(E), only the emotional behaviors of “joy”, “fear”, “anger” and “surprise” can be expressed as the actions in the “child period” and all the emotional behaviors can be expressed as the actions in the “young period” and the “adult period”.
  • As described above, in the pet robot 110, as the “growth stage” rises, the number of emotional behaviors which can be expressed as actions (the number of emotional behaviors employed for generating actions) increases. Thus, with the “growth” of the action, the emotion can “grow”.
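  • As a rough illustration of how a zeroed coefficient suppresses an emotional behavior in each “growth stage”, the following Python sketch can be considered; the exact form of the equation (4) is not reproduced here, the update shown is only a stand-in, and the stage names, table values and function names are hypothetical.

    # Hypothetical sketch: emotions whose coefficient ke' is 0 for the current
    # "growth stage" never change, so they are never expressed as actions.
    EMOTIONS = ["joy", "fear", "anger", "surprise", "disgust", "sadness"]

    KE_BY_STAGE = {                                          # illustrative values only
        "tweety": {"joy": 0.6, "fear": 0.6},                 # all others default to 0
        "baby":   {"joy": 0.6, "fear": 0.6, "anger": 0.5},
    }

    def update_emotions(stage, params, deltas):
        """Stand-in for the cyclic update: a zero ke' keeps that parameter constant (suppressed)."""
        ke = KE_BY_STAGE.get(stage, {})
        return {e: params[e] + ke.get(e, 0.0) * deltas.get(e, 0.0) for e in EMOTIONS}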
  • Similarly to the above, in the instinct model 184 (FIG. 22) of the application layer 151 are stored files (hereinafter called instinct parameter files) 201A to 201E in which the values of the coefficient ki′ of the equation (5) for each desire in each of the “growth stages” are described, as shown in FIGS. 30(A) to 30(E).
  • The instinct model 184 is adapted to cyclically update the parameter values of the respective desires on the basis of the equation (5) by employing the values of the coefficient ki′ described in the instinct parameter file 201A for the “tweety period” during the initial time (that is to say, while the “growth stage” is the “tweety period”).
  • Further, every time the “growth stage” rises, a notification informing the instinct model 184 of this rise is supplied from the learning module 182 (FIG. 22), as in the case of the emotion model 183. Then, every time the notification is supplied, the instinct model 184 updates the values of the coefficient ki′ of the equation (5) for each desire to the corresponding values described in the instinct parameter files 201B to 201E of the corresponding “growth stages”.
  • At this time, in the “tweety period”, as is apparent from FIG. 30(A), all the values of the coefficient ki′ of the desires other than the “appetite” are set to “0”. Accordingly, in the “tweety period”, only the parameter value of the “appetite” changes when the desires are cyclically updated; the parameter values of the other desires remain constant, so that the desires except the “appetite” are suppressed. Consequently, in the “tweety period”, only the “appetite” of the four desires can be expressed as an action (in other words, only the parameter value of the “appetite” of the four desires is employed for generating an action).
  • Further, in the “baby period”, as is apparent from FIG. 30(B), all the values of the coefficient ki′ of the desires other than the “appetite” and the “affection desire” are set to “0”. Accordingly, in the “baby period”, only the parameter values of the “appetite” and the “affection desire” change when the desires are cyclically updated; the parameter values of the other desires remain constant. As a result, in the “baby period”, the desires other than the “appetite” and the “affection desire” are suppressed and only the “appetite” and the “affection desire” of the four desires can be expressed as actions.
  • Similarly, as shown in FIGS. 30(C) to 30(E), in the “child period”, the “appetite”, the “affection desire” and the “curiosity” can be expressed as actions, and in the “young period” and the “adult period”, all the desires can be expressed as actions.
  • As described above, in the pet robot 110, as the “growth stage” rises, the number of desires which can be expressed as actions and operations (the number of desires used for generating actions) increases, as shown in FIGS. 30(A) to 30(E). Thus, with the “growth” of the action, the instinct is allowed to “grow”.
  • (3-4) Operation and Effects of First Embodiment
  • The pet robot 110 having the above described configuration can express only a part of the emotional behaviors and desires of six emotional behaviors and four desires as the actions during an initial time. Then, as the “growth stage” rises, the number of emotional behaviors and desires which can be expressed as the actions increases.
  • Therefore, since the emotion and instinct of the pet robot 110 also “grow” with the “growth” of the actions, the “growth” can be more biologically and naturally expressed, and the user can enjoy the processes thereof.
  • Further, since the emotion and instinct models of the pet robot 110 first start from two simple emotional behaviors and one desire, the user can grasp the actions of the pet robot 110 with ease. Then, as the user becomes accustomed to them, the emotion and instinct models become complicated little by little, so that the user can readily understand and adapt to the emotion and instinct models in each “growth stage”.
  • According to the above described configuration, only a part of the six emotional behaviors and the four desires can be expressed as actions during the initial time, and the number of emotional behaviors and desires which can be expressed as actions is increased as the “growth stage” rises. Therefore, the “growth” can be expressed more biologically and naturally, so that a pet robot capable of improving an entertainment characteristic can be realized.
  • (3-5) Other Embodiments
  • In the above described third embodiment, although there is described a case in which the present invention is applied to a four-footed walking type pet robot, it should be noted that the present invention is not limited thereto, and can be widely applied to, for instance, a two-footed robot or other various kinds of robot apparatuses.
  • Further, in the above described third embodiment, although there is described a case in which the emotion model 183 and the instinct model 184 are applied as the restricting means for restricting the number of emotional behaviors or desires used for generating the action so that it increases stepwise, needless to say, the present invention is not limited thereto, and the number of emotional behaviors or desires used for generating the action may be restricted so as to increase stepwise in, for instance, the behavioral model library 180 or the learning module 182.
  • Still further, in the above described third embodiment, although there is described a case in which the number of emotional behaviors or desires used for generating the action is restricted so as to increase stepwise on the basis of the externally applied prescribed stimulation (for instance, being “hit” or “patted”) and the lapse of time, it should be noted that the present invention is not limited thereto, and the number of emotional behaviors or desires used for generating the action may be restricted so as to increase stepwise, in addition thereto or in place thereof, on the basis of conditions other than the above (for example, whether a desired action has been successfully performed).
  • Still further, in the above described third embodiment, although there is described a case in which the number of emotional behaviors and desires is increased in the course of five stages including the “tweety period”, the “baby period”, the “child period”, the “young period” and the “adult period” so as to match the “growth” of the action, it should be recognized that the present invention is not limited thereto, and a “growth stage” other than these may be provided.
  • Still further, in the above described third embodiment, although there is described a case in which a plurality of behavioral models 180 1 to 180 n are provided in the behavioral model library 180 as the action generating means for generating the action on the basis of the respective parameter values of the emotional behaviors and desires and the behavioral models 180 1 to 180 n, it should be recognized that the present invention is not limited thereto, and the action may be generated on the basis of one behavioral model; various other configurations may be broadly applied to the configuration of the action generating means.
  • Still further, in the above described third embodiment, although there is described a case in which the behavioral model changing means for changing the behavioral models 180 k(1) to 180 k(5) (FIG. 26) to the behavioral models 180 k(2) to 180 k(5) high in growth level on the basis of the accumulation of prescribed stimulation (for instance, “hit” or “pat”) and the lapse of time comprises the learning module 182 and the behavioral model library 180, needless to say, the present invention is not limited thereto and various kinds of other configurations can be widely applied.
  • Still further, in the above described third embodiment, although there is described a case in which, as the emotional behaviors, six emotional behaviors including “joy”, “sadness”, “anger”, “surprise”, “disgust” and “fear” are provided and, as the desires, four desires including “exercise”, “affection”, “appetite” and “curiosity” are provided, it should be noted that the present invention is not limited thereto and various other kinds and numbers of emotional behaviors and desires may be broadly employed.
  • (4) Fourth Embodiment
  • (4-1) Configuration of Pet Robot 205 According to Fourth Embodiment
  • In FIG. 18, reference numeral 205 generally denotes a pet robot according to a fourth embodiment, which is configured similarly to the pet robot 110 according to the third embodiment except that the sensitivity of the desires and emotional behaviors is changed in accordance with the surrounding environment or actions from a user or the like.
  • Specifically, in the pet robot 110 of the third embodiment, the equation (4) or (5) is used when the emotion or the instinct is updated as mentioned above. However, the sensor outputs (sensor data from various kinds of sensors, image data, audio data, etc.) on which this update depends are greatly dependent upon the environment in which the pet robot 110 is present and the manner in which the user treats the pet robot.
  • For instance, in the case where the user frequently “hits” the pet robot and seldom “pats” it, the pet robot 110 is brought to a state in which the emotion of “anger” is amplified most of the time because it is “hit”, so that the emotion of “joy” is not expressed as an action or operation even when it is occasionally “patted”. For the pet robot 110 in such a state, the sensitivity of “joy” therefore needs to be raised above that of “anger” so that the number and kinds of emotions expressed as actions or operations are not biased.
  • Thus, in the pet robot 205 according to the fourth embodiment, the parameter values of each emotional behavior and each desire are separately integrated over a long time, the integrated values are compared with each other among the parameters, and the sensitivity of an emotional behavior or a desire is raised or lowered when the rate of its integrated value relative to the total is extremely small or large, so that all the emotional behaviors and desires can be expressed equally as actions or operations in accordance with the environment and the manner in which the user treats the pet robot.
  • In practice, in the pet robot 205, the emotion model 207 of an application layer 206 shown in FIG. 22 sequentially calculates, at intervals of a prescribed period ΔT′ (for instance, every 1 to 2 minutes), the integrated values Ek″ of the parameter values of the respective emotional behaviors after the power source was last turned on and the lapse of time Tall″ after the power source was last turned on, in accordance with the following equations.
    Ek″ = Ek′ + Ek(t)′ × ΔT′  (7)
    Tall″ = Tall′ + ΔT′  (8)
  • In the above equations, the parameter value of the emotional behavior at that time is Ek(t)′, the integrated value of the parameter values of the emotional behavior up to that time after the power source was last turned on is Ek′, and the lapse of time up to that time after the power source was last turned on is Tall′.
  • Further, when the power source of the pet robot 205 is turned off (upon shut-down of the system), the emotion model 207 adds the integrated value Ek″ of each emotional behavior and the integrated value Tall″ of the lapse of time to the corresponding total integrated value Ek(TOTAL) for each emotional behavior and the total integrated value Tall(TOTAL) of the lapse of time, which are stored in respectively prepared files (hereinafter called total emotional behavior integrated value files), and stores these total integrated values.
  • Then, the emotion model 207 reads out the total integrated value Tall(TOTAL) of the lapse of time from the total emotional behavior integrated value file, every time the power of the pet robot 205 is turned on. When the total integrated value Tall(TOTAL) exceeds a preset threshold value (for example, 10 hours), the total integrated value Ek(TOTAL) of each emotional behavior stored in the total emotional behavior integrated value file is evaluated. Specifically, the evaluation is executed by calculating the rate of the total integrated value Ek(TOTAL) of the emotional behavior relative to the total value (Σ Ek(TOTAL)) of the total integrated value Ek(TOTAL) of each emotional behavior.
  • Then, when the rate of the total integrated value Ek(TOTAL) of an emotional behavior is lower than a preset threshold value for that emotional behavior, the emotion model 207 raises the value of the coefficient ke′ of the emotional behavior described in the emotion parameter file of the corresponding “growth stage” among the emotion parameter files 200A to 200E described above with reference to FIGS. 29(A) to 29(E) by a prescribed amount (for instance, 0.1). Conversely, when the rate exceeds the threshold value, the value of this coefficient ke′ is lowered by a prescribed amount (for instance, 0.1). In such a way, the emotion model 207 adjusts the coefficient ke′ indicating the sensitivity of the emotional behavior.
  • The threshold value can be set with a certain tolerance for each emotional behavior so that the individuality of each robot is not impaired. In the present embodiment, for instance, the threshold for the emotional behavior of “joy” is set to 10 [%] to 50 [%] of the total value (Σ Ek(TOTAL)) of the total integrated values Ek(TOTAL) of the emotional behaviors, that for “sadness” to 5 [%] to 20 [%] of the total value, and that for “anger” to 10 [%] to 60 [%] thereof.
  • Further, when the emotion model 207 completes the same processing for all the emotional behaviors, the emotion model 207 returns the total integrated value Ek(TOTAL) of each emotional behavior and the total integrated value Tall(TOTAL) of the lapse of time to “0”. Then, while the emotion model 207 changes the parameter values of each emotional behavior in accordance with the equation (4) by using a newly determined coefficient ke′ for each emotional behavior, the emotion model 207 newly starts the integration of the parameter values of each emotional behavior and the lapse of time in accordance with the equation (7), and then, repeats the same processing as mentioned above.
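  • The periodic integration of the equations (7) and (8) and the rate check described above can be pictured with the following Python sketch; it is an illustration under assumptions, not the embodiment's code, and the period value, the bounds and all names (integrate_step, adjust_sensitivity and so on) are hypothetical.

    # Hypothetical sketch of equations (7) and (8) plus the sensitivity adjustment at power-on.
    DT = 90.0  # assumed "prescribed period" dT' of 1 to 2 minutes, in seconds

    def integrate_step(e_total, t_total, current_params, dt=DT):
        """Ek'' = Ek' + Ek(t)' x dT';  Tall'' = Tall' + dT'."""
        for k, value in current_params.items():
            e_total[k] = e_total.get(k, 0.0) + value * dt
        return e_total, t_total + dt

    def adjust_sensitivity(ke, totals, bounds, step=0.1):
        """Raise ke' when an emotion's share of the grand total falls below its lower bound,
        and lower ke' when the share exceeds its upper bound (e.g. "joy": 10% to 50%)."""
        grand_total = sum(totals.values()) or 1.0
        for k, (low, high) in bounds.items():
            share = totals.get(k, 0.0) / grand_total
            if share < low:
                ke[k] = ke.get(k, 0.0) + step
            elif share > high:
                ke[k] = max(0.0, ke.get(k, 0.0) - step)
        return ke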
  • As mentioned above, in the pet robot 205, the sensitivity of each emotional behavior is changed, so that all the emotional behaviors can be expressed equally as actions or operations in accordance with the environment and the manner in which the user treats the pet robot.
  • Similarly to the above, the instinct model 208 (FIG. 22) sequentially calculates, at intervals of the prescribed period ΔT′ (for instance, every 1 to 2 minutes), the integrated values Ik″ of the parameter values of the respective desires after the power source was last turned on in accordance with the following equation, and sequentially calculates the integrated value Tall″ of the lapse of time after the power source was last turned on in accordance with the equation (8).
    Ik″ = Ik′ + Ik(t)′ × ΔT′  (9)
  • In the above equation, the parameter value of the desire at that time is Ik(t)′ and the integrated value of the parameter values of the desire up to that time after the power source was last turned on is Ik′.
  • Further, when the power source of the pet robot 205 is turned off, the instinct model 208 adds the integrated value Ik″ of each desire and the integrated value Tall″ of the lapse of time to the corresponding total integrated value Ik(TOTAL) for each desire and the total integrated value Tall(TOTAL) of the lapse of time, which are stored in respectively prepared files (hereinafter called total desire integrated value files), and stores these total integrated values.
  • Then, the instinct model 208 reads out the total integrated value Tall(TOTAL) of the lapse of time from the total desire integrated value file, every time the power of the pet robot 205 is turned on. When the total integrated value Tall(TOTAL) exceeds a preset threshold value (for example, 10 hours), the total integrated value Ik(TOTAL) of each desire stored in the total desire integrated value file is evaluated. Specifically, the evaluation is executed by calculating the rate of the total integrated value Ik(TOTAL) of the desire relative to the total value (Σ Ik(TOTAL)) of the total integrated value Ik(TOTAL) of each desire.
  • Then, when the rate of the total integrated value Ik(TOTAL) of a desire is lower than a preset threshold value for that desire, the instinct model 208 raises the value of the coefficient ki′ of the desire described in the instinct parameter file of the corresponding “growth stage” among the instinct parameter files 201A to 201E described above with reference to FIGS. 30(A) to 30(E) by a prescribed amount (for instance, 0.1). Conversely, when the rate exceeds the threshold value, the value of this coefficient ki′ is lowered by a prescribed amount (for instance, 0.1). The threshold value at this time is set with a certain tolerance for each desire as mentioned above.
  • Further, when the instinct model 208 completes the same processing for all the desires, the instinct model 208 returns the total integrated value Ik(TOTAL) of each desire stored in the total desire integrated value file and the total integrated value Tall(TOTAL) of the lapse of time to “0”. Then, while the instinct model 208 changes the parameter values of each desire in accordance with the equation (5) by using the newly determined coefficient ki′ for each desire, the instinct model 208 newly starts the integration of the parameter values of each desire and of the lapse of time in accordance with the equations (9) and (8), and then repeats the same processing as mentioned above.
  • As mentioned above, in the pet robot 205, the sensitivity of each desire is changed, so that all the desires can be equally expressed as the actions or operations so as to meet the environment or a manner by which the user treats the pet robot.
  • (4-2) Operation and Effects of Fourth Embodiment
  • In the pet robot 205 with the above mentioned configuration, the parameter values of each emotional behavior and each desire are sequentially integrated at intervals of the prescribed period ΔT′, and the sensitivity of each emotional behavior and each desire is changed on the basis of the integrated result.
  • Accordingly, in the pet robot 205, all the emotional behaviors and desires can be expressed equally as actions or operations in accordance with the environment and the manner in which the user treats the pet robot. Therefore, this pet robot 205 has a higher amusement characteristic than the pet robot 110 of the third embodiment.
  • With the above described configuration, the parameter values of each emotional behavior and each desire are sequentially integrated at intervals of the prescribed period ΔT′ and the sensitivity of each emotional behavior and each desire is changed on the basis of the integrated result. Therefore, all the emotional behaviors and desires can be expressed equally as actions or operations in accordance with the environment and the way in which the user treats the pet robot. Thus, a pet robot capable of further improving an amusement characteristic can be realized.
  • (4-3) Other Embodiments
  • In the above described fourth embodiment, although there is described a case in which the present invention is applied to the four-footed walking type pet robot 205, it should be noted that the present invention is not limited thereto, and can be widely applied to, for instance, a two-footed robot or various other kinds of robot apparatuses.
  • Further, in the above described fourth embodiment, although there is described a case in which the emotion model 207 and the instinct model 208 are applied as the restricting means for restricting the number of emotional behaviors or desires used for generating the action so that it increases stepwise, needless to say, the present invention is not limited thereto, and the number of emotional behaviors or desires used for generating the action may be restricted so as to increase stepwise in, for instance, the behavioral model library 180 or the learning module 182.
  • Still further, in the above described fourth embodiment, although there is described a case in which the number of emotional behaviors or desires used for generating the action is restricted so as to increase stepwise on the basis of the externally applied prescribed stimulation (for instance, being “hit” or “patted”) and the lapse of time, it should be noted that the present invention is not limited thereto, and the number of emotional behaviors or desires used for generating the action may be restricted so as to increase stepwise, in addition thereto or in place thereof, on the basis of conditions other than the above (for example, whether a desired action has been successfully performed).
  • Still further, in the above described fourth embodiment, although there is described a case in which the number of emotional behaviors and desires is increased in the order shown in FIGS. 29(A) to 29(E) and FIGS. 30(A) to 30(E), the present invention is not limited thereto, and the emotional behaviors or the desires may be increased in an order other than the above.
  • Still further, in the above described fourth embodiment, although there is described a case in which the number of emotional behaviors and desires is increased in the course of five stages including the “tweety period”, the “baby period”, the “child period”, the “young period” and the “adult period” so as to match the “growth” of the action, it should be recognized that the present invention is not limited thereto, and a “growth stage” other than these may be provided.
  • Still further, in the above described fourth embodiment, although there is described a case in which the number of emotional behaviors and desires used for generating the action sequentially increases, needless to say, the present invention is not limited thereto, and the number of emotional behaviors and desires used for generating the action may be initially decreased or decreased halfway (for instance, the “growth stage” such as an “old age period” is provided after the “adult period” so that the number of emotions or desires is decreased upon shift of the “growth stage” to the “old age period”).
  • Still further, in the above described fourth embodiment, although there is described a case in which a plurality of behavioral models 180 1 to 180 n are provided in the behavioral model library 180 as the action generating means for generating the action on the basis of the respective parameter values of the emotional behaviors and desires and the behavioral models 180 1 to 180 n, it should be recognized that the present invention is not limited thereto, and the action may be generated on the basis of one behavioral model; various other configurations may be broadly applied to the configuration of the action generating means.
  • Still further, in the above described fourth embodiment, although there is described a case in which the behavioral model changing means for changing the behavioral models 180 k(1) to 180 k(5) (FIG. 26) to the behavioral models 180 k(2) to 180 k(5) higher in growth level on the basis of the accumulation of prescribed stimulation (for instance, being “hit” or “patted”) and the lapse of time comprises the learning module 182 and the behavioral model library 180, needless to say, the present invention is not limited thereto and various other configurations can be widely applied.
  • Still further, in the above described fourth embodiment, although there is described a case in which, as the emotional behaviors, six emotional behaviors including “joy”, “sadness”, “anger”, “surprise”, “disgust” and “fear” are provided and, as the desires, four desires including “exercise”, “affection”, “appetite” and “curiosity” are provided, it should be noted that the present invention is not limited thereto and various other kinds and numbers of emotional behaviors and desires may be broadly employed.
  • Still further, in the above described fourth embodiment, although there is described a case in which the emotion model 207 and the instinct model 208 as the emotional behavior or desire updating means update the parameter values of the respective emotional behaviors and desires on the basis of the externally applied stimulation and the lapse of time, it should be recognized that the present invention is not limited thereto and the parameter values of the respective emotional behaviors and desires may be updated on the basis of conditions other than the above conditions.
  • Still further, in the above described fourth embodiment, although there is described a case in which the value of the coefficient ke′ or ki′ of the equation (4) or (5) is changed as the method by which the emotion model 207 and the instinct model 208, as the sensitivity changing means, respectively update the sensitivity to each emotional behavior or each desire, needless to say, the present invention is not limited thereto and the sensitivity to each emotional behavior and each desire may be updated by any method other than the above.
  • Furthermore, in the above described fourth embodiment, although there is described a case in which the environment is evaluated on the basis of the rate of the total integrated value Ek(TOTAL) or Ik(TOTAL) of the parameter values of an emotional behavior or a desire, which are sequentially updated, relative to the total value (Σ Ek(TOTAL) or Σ Ik(TOTAL)) of the total integrated values Ek(TOTAL) or Ik(TOTAL) of the parameter values of all the emotional behaviors or desires, it should be noted that the present invention is not limited thereto; the environment may be evaluated on the basis of the number of times or the frequency of specific external stimulation such as being “hit” or “patted”, and a variety of methods other than the above may be widely applied as the method for evaluating the environment.
  • (5) Fifth Embodiment
  • The fifth embodiment will be described in detail using the accompanying drawings. The present invention is applicable to a pet robot which acts like a quadruped animal.
  • As shown in FIG. 31, a pet robot comprises detecting means 215 which detects an output from another pet robot, character discriminating means 216 which discriminates a character of the other pet robot on the basis of a detection result obtained by the detecting means 215 and character changing means 217 which changes a character on the basis of a discriminating result obtained by the character discriminating means 216.
  • Accordingly, the pet robot discriminates the character of the other robot apparatus by the character discriminating means 216 on the basis of the detection result of the output from the other pet robot obtained by the detecting means 215. The pet robot is capable of changing a character of its own by the character changing means 217 on the basis of the discriminating result of the character of the other pet robot.
  • Though described in detail later, the detecting means 215 detects an emotion or the like expressed by an action of the other pet robot and the character changing means 217 changes the character by changing parameters or the like of an emotion model which determines an action of the pet robot itself deriving from an emotion.
  • The pet robot is capable of changing the character of its own on the basis of an action or the like of the other pet robot as described above.
  • Accordingly, the pet robot has a character which is shaped like that of a true animal and can act on the basis of the character.
  • Now, description will be made of a specific configuration of a pet robot to which the present invention is applied.
  • (5-1) Configuration of Pet Robot
  • A pet robot 210 is configured as a whole as shown in FIG. 32, and composed by coupling a head unit 211 which corresponds to a head, a main body unit 212 which corresponds to a trunk, leg units 213A through 213D which correspond to legs and a tail unit 214 which corresponds to a tail so that the pet robot 210 acts like a true quadruped animal by moving the head unit 211 and the leg units 213A through 213D and the tail unit 214 relative to a main body unit 212.
  • Attached to predetermined locations of the head unit 211 are an image recognizing section 220 which corresponds to eyes and consists, for example, of CCD (Charge Coupled Device) cameras for picking up an image, microphones 221 which correspond to ears for collecting a voice, and a speaker 222 which corresponds to a mouth for giving sounds. Furthermore, attached to the head unit 211 are a remote controller receiver 223 which receives a command transmitted from a user by way of a remote controller (not shown), a touch sensor 224 which detects touch with a user's hand or the like, and an image display 225 which displays an internally generated image.
  • A battery 231 is attached to a location of the main body unit 212 corresponding to a ventral side and an electronic circuit (not shown) is accommodated in the main body unit 212 for controlling the action of the pet robot 210 as a whole.
  • Joint portions of the leg units 213A through 213D, coupling portions between the leg units 213A through 213D and the main body unit 212, a coupling portion between the main body unit 212 and the head unit 211, a coupling portion between the main body unit 212 and the tail unit 214 and the like are coupled with actuators 233A through 233N respectively which are driven under control by the electronic circuit accommodated in the main body unit 212. By driving the actuators 233A through 233N, the pet robot 210 is let to act like a true quadruped animal as described above while swinging the head unit 211 up, down, left and right, wagging the tail unit 214, walking and running by moving the leg units 213A through 213D.
  • (5-2) Circuit Configuration of Pet Robot 210
  • Description will be made here of a circuit configuration of the pet robot 210 by using FIG. 33. The head unit 211 has a command receiving section 240 which comprises a microphone 221 and a remote controller receiver 223, an external sensor 241 which comprises an image recognizing section 220 and a touch sensor 224, a speaker 222 and an image display 225. Furthermore, the main body unit 212 has a battery 231, and comprises a controller 242 which controls an action of the pet robot 210 as a whole and an internal sensor 245 which comprises a battery sensor 243 for detecting a residual amount of the battery 231 and a heat sensor 244 for detecting heat generated in the pet robot 210. Furthermore, actuators 233A through 233N are disposed at predetermined locations respectively in the pet robot 210.
  • The command receiving section 240 is used for receiving commands given from the user to the pet robot 210, for example, commands such as “walk”, “prostrate” and “chase a ball”, and is configured by the remote controller receiver 223 and the microphone 221. When a desired command is input by a user's operation, a remote controller (not shown) transmits an infrared ray corresponding to the above described input command to the remote controller receiver 223. Upon receiving this infrared ray, the remote controller receiver 223 generates a reception signal S1A and sends out this signal to the controller 242. When the user emits sounds corresponding to a desired command, the microphone 221 collects the sounds emitted from the user, generates an audio signal S1B and sends out this signal to the controller 242. In response to a command given from the user to the pet robot 210, the command receiving section 240 thus generates a command signal S1 comprising the reception signal S1A and the audio signal S1B, and supplies the command signal to the controller 242.
  • The touch sensor 224 of the external sensor 241 is used for detecting an approach from the user to the pet robot 210, for example, an approach such as “tapping” or “striking”, and when the user makes a desired approach by touching the above described touch sensor 224, the touch sensor 224 generates a touch detection signal S2A corresponding to the above described approach and sends out this signal to the controller 242.
  • The image recognizing section 220 of the external sensor 241 is used for discriminating an environment around the pet robot 210, for example, surrounding environment information such as “dark” or “a favorite toy is present”, or a movement of another pet robot such as “another pet robot is running”; it photographs an image around the above described pet robot 210 and sends out an image signal S2B obtained as a result to the controller 242. This image recognizing section 220 also captures an action which expresses an emotion of the other pet robot.
  • The external sensor 241 generates an external information signal S2 comprising the touch detection signal S2A and the image signal S2B in response to external information given from outside the pet robot 210 as described above, and sends out the external information signal to the controller 242.
  • The internal sensor 245 is used for detecting an internal condition of the pet robot 210 itself, for example, an internal condition of “hungry” meaning a lowered battery capacity or “fevered”, and is configured by the battery sensor 243 and the heat sensor 244.
  • The battery sensor 243 is used for detecting a residual amount of the battery 231 which supplies power to each circuit of the pet robot 210 and sends out a battery capacity detection signal S3A to the controller 242 as a detection result. The heat sensor 244 is used for detecting heat in the pet robot 210 and sends out a heat detection signal S3B to the controller 242 as a detection result. The internal sensor 245 generates an internal information signal S3 comprising the battery capacity detection signal S3A and the heat detection signal S3B in correspondence to internal information of the pet robot 210 as described above and sends out the internal information signal S3 to the controller 242 as described above.
  • On the basis of the command signal S1 supplied from the command receiving section 240, the external information signal S2 supplied from the external sensor 241 and the internal information signal S3 supplied from the internal sensor 245, the controller 242 generates control signals S5A through S5N for driving the actuators 233A through 233N and sends out the control signals to the actuators 233A through 233N, thereby making the pet robot 210 act.
  • At this time, the controller 242 generates an audio signal S10 and an image signal S11 to be output outside as occasion demands, and informs the user of required information by outputting the audio signal S10 outside by way of the speaker 222 and sending out the image signal S11 to the image display 225, thereby providing a desired image on the display.
  • (5-3) Data Processing in Controller
  • Now, description will be made of data processing performed in the controller 242. In accordance with programs preliminarily stored in a predetermined memory area and used as software, the controller 242 processes the command signal S1 supplied from the command receiving section 240, the external information signal S2 supplied from the external sensor 241 and the internal information signal S3 supplied from the internal sensor 245, and supplies a control signal S5 obtained as a result to the actuators 233A through 233N.
  • Functions of the controller 242 for the data processing are classified into an emotion and instinct model section 250 used as emotion and instinct model changing means, a behavior determination mechanism section 251 used as action state determining means, a posture transition mechanism section 252 used as posture transition means and a control mechanism section 253 as shown in FIG. 34, and the controller 242 inputs the command signal S1 supplied from outside, the external information signal S2 and the internal information signal S3 to the emotion and instinct model section 250 and the behavior determination mechanism section 251.
  • The emotion and instinct model section 250 has a basic emotion group 260 which comprises emotion units 260A through 260C adopted as a plurality of independent emotion models and a basic desire group 261 which comprises desire units 261A through 261C adopted as a plurality of independent desire models. In the basic emotion group 260, the emotion unit 260A expresses an emotion of “delight”, the emotion unit 260B expresses an emotion of “sadness” and the emotion unit 260C expresses an emotion of “anger”.
  • The emotion units 260A through 260C express degrees of emotions as intensities, for example, from 0 to 100 levels, and change the intensities of the emotions from one minute to the next on the basis of the command signal S1, the external information signal S2 and the internal information signal S3 which are supplied. Accordingly, the emotion and instinct model section 250 expresses an emotion state of the pet robot 210 by combining the intensities of the emotion units 260A through 260C which change from one minute to the next, thereby modeling changes with time of the emotions.
  • In the basic desire group 261, the desire unit 261A expresses a desire of “appetite”, the desire unit 261B expresses a “desire for sleep” and the desire unit 261C expresses a “desire for movement”. Like the emotion units 260A through 260C, the desire units 261A through 261C express degrees of desires as intensities, for example, from 0 to 100 levels, and change the intensities of the desires from one minute to the next on the basis of the command signal S1, the external information signal S2 and the internal information signal S3 which are supplied. Accordingly, the emotion and instinct model section 250 expresses an instinct state of the pet robot 210 by combining the intensities of the desire units 261A through 261C which change from one minute to the next, thereby modeling changes with time of the instincts.
  • The emotion and instinct model section 250 changes the intensities of the emotion units 260A through 260C and the desire units 261A through 261C as described above on the basis of input information S1 through S3 which comprises the command signal S1, the external information signal S2 and the internal information signal S3. The emotion and instinct model section 250 determines the emotion state by combining the changed intensities of the emotion units 260A through 260C, determines the instinct state by combining the changed intensities of the desire units 261A through 261C, and sends out determined emotion state and instinct state to the behavior determination mechanism section 251 as emotion and instinct state information S10.
  • The emotion and instinct model section 250 combines selected emotion units in the basic emotion group 260 so that they restrain or stimulate each other, thereby changing the intensity of one of the combined emotion units when the intensity of the other emotion unit is changed and realizing the pet robot 210 which has natural emotions.
  • Concretely speaking, the emotion and instinct model section 250 combines the “delight” emotion unit 260A with the “sadness” emotion unit 260B so as to restrain each other as shown in FIG. 36, thereby enhancing an intensity of the “delight” emotion unit 260A when the pet robot 210 is praised by the user and lowering an intensity of the “sadness” emotion unit 260B as the intensity of the “delight” emotion unit 260A is enhanced even when the input information S1 through S3 which changes the intensity of the “sadness” emotion unit 260B is not supplied at this time. Similarly, the emotion and instinct model section 250 naturally lowers an intensity of the “delight” unit 260A as an intensity of the “sadness” emotion unit 260B is enhanced when the intensity of the “sadness” emotion unit 260B is enhanced.
  • Furthermore, the emotion and instinct model section 250 combines the “sadness” emotion unit 260B with the “anger” emotion unit 260C so as to stimulate each other, thereby enhancing an intensity of the “anger” emotion unit 260C when the pet robot is struck by the user and enhancing an intensity of the “sadness” emotion unit 260B as the intensity of the “anger” emotion unit 260C is enhanced even when the input information S1 through S3 which enhances the intensity of the “sadness” emotion unit 260B is not supplied at this time. When an intensity of the “sadness” emotion is enhanced, the emotion and instinct model section 250 similarly enhances an intensity of the “anger” emotion unit 260C naturally as the intensity of the “sadness” emotion unit 260B is enhanced.
  • As in the case where the emotion units are combined with each other, the emotion and instinct model section 250 combines selected desire units in the basic desire group 261 so that they restrain or stimulate each other, thereby changing the intensity of one of the combined desire units when the intensity of the other desire unit is changed and realizing the pet robot 210 which has natural instincts.
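  • A minimal Python sketch of the mutually restraining and mutually stimulating couplings described above is shown below; the coupling weights and the names COUPLINGS and apply_coupling are assumptions made for illustration only.

    # Hypothetical sketch: "delight" and "sadness" restrain each other (negative weight),
    # while "sadness" and "anger" stimulate each other (positive weight); intensities are 0 to 100.
    COUPLINGS = {("delight", "sadness"): -0.5, ("sadness", "anger"): 0.5}   # illustrative weights

    def clamp(x):
        return min(100.0, max(0.0, x))

    def apply_coupling(intensities, changed_unit, delta):
        """Change one unit's intensity and propagate the change to the units coupled with it."""
        intensities[changed_unit] = clamp(intensities[changed_unit] + delta)
        for (a, b), weight in COUPLINGS.items():
            if changed_unit == a:
                other = b
            elif changed_unit == b:
                other = a
            else:
                continue
            intensities[other] = clamp(intensities[other] + weight * delta)
        return intensities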
  • Returning to FIG. 34, action information S12 representing a current action or a past action of the pet robot 210 itself, for example, “walked for a long time” is supplied to the emotion and instinct model section 250 from the behavior determination mechanism section 251 disposed at a later stage so that the emotion and instinct model section 250 generates emotion and instinct state information S10 which is different dependently on an action of the pet robot represented by the above described action information S12 even when identical input information S1 through S3 is given.
  • Specifically, as shown in FIG. 37, the emotion and instinct model section 250 has intensity increase/decrease functions 265A through 265C which are disposed at a stage before the emotion units 260A through 260C and which generate intensity information S14A through S14C for enhancing and/or lowering the intensities of the emotion units 260A through 260C on the basis of the action information S12 representing an action of the pet robot 210 and the input information S1 through S3. The emotion and instinct model section 250 then enhances and/or lowers the intensities of the emotion units 260A through 260C in correspondence to the intensity information S14A through S14C output from the above described intensity increase/decrease functions 265A through 265C.
  • The emotion and instinct model section 250 enhances the intensity of the “delight” emotion unit 260A, for example, when the pet robot makes a courtesy to the user and is tapped on the head, that is, when the action information S12 representing a courtesy to the user and the input information S1 through S3 representing being tapped on the head are given to the intensity increase/decrease function 265A, whereas the emotion and instinct model section 250 does not change the intensity of the “delight” emotion unit 260A even when the pet robot is tapped on the head during execution of some task, that is, when the action information S12 representing execution of a task and the input information S1 through S3 representing being tapped on the head are given to the intensity increase/decrease function 265A.
  • The emotion and instinct model section 250 determines intensities of the emotion units 260A through 260C while referring not only to the input information S1 through S3 but also to the action information S12 representing the current or past action of the pet robot 210, thereby being capable of preventing an unnatural emotion from arising which enhances an intensity of the “delight” emotion unit 260A, for example, when the user taps on the head for mischief during execution of some task. By the way, the emotion and instinct model section 250 is configured to similarly enhance and/or lower intensities also of the desire units 261A through 261C respectively on the basis of the input information S1 through S3 and the action information S12 which are supplied.
  • When the input information S1 through S3 and the action information S12 are input, the intensity increase/decrease functions 265A through 265C generate and output the intensity information S14A through S14C as described above dependently on parameters which are preliminarily set, thereby making it possible to breed the above described pet robot 210 so as to be a pet robot having an individuality, for example, a pet robot liable to get angry or a pet robot having a cheerful character by setting the above described parameters of the pet robot 210 at different values.
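  • A minimal sketch of such an intensity increase/decrease function is given below; the gain parameters and the name intensity_delta are hypothetical, and the example simply mirrors the “tapped on the head” case described above, with the individuality coming from the preset gain values.

    # Hypothetical sketch: the preset gains decide how strongly a stimulus moves "delight";
    # different gain settings give an individuality such as "cheerful" or "liable to get angry".
    def intensity_delta(action_info, stimulus, gain_when_receptive=5.0, gain_when_busy=0.0):
        """Being tapped after a courtesy raises "delight"; the same tap during a task changes nothing."""
        if stimulus != "tapped_on_head":
            return 0.0
        return gain_when_receptive if action_info == "courtesy_to_user" else gain_when_busy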
  • Furthermore, parameters of the emotion model can be changed dependently on a character of another robot apparatus (hereinafter referred to as a mate robot). In other words, a character of the pet robot 210 is changed through interactions with the mate robot, thereby shaping the character with a characteristic such as “He that touches pitch shall be defiled”. It is therefore possible to realize the pet robot 210 having a character which is formed naturally.
  • Specifically, it is possible to form a character which is adapted to the mate robot apparatus by disposing an emotion recognition mechanism section 271, a memory and analysis mechanism section 272, and a parameter change mechanism section 273 as shown in FIG. 38.
  • The emotion recognition mechanism section 271 recognizes whether or not an action of the mate robot expresses some emotion as well as a kind and an intensity of an emotion when the action expresses an emotion. Specifically, the emotion recognition mechanism section 271 comprises a sensor which detects an action of the mate robot and an emotion recognizing section which recognizes an emotion of the mate robot on the basis of a sensor input from the sensor, captures the action of the mate robot with the sensor and recognizes the emotion of the mate robot from the sensor input with the emotion recognizing section. The sensor input is, for example, the external information signal S2 out of the above described input signals and may be the audio signal S1B from the microphone 221 shown in FIG. 33 or the image signal S2B from the image recognizing section 220 shown in FIG. 33.
  • Speaking of recognition of an emotion, the emotion recognition mechanism section 271 recognizes an emotion expressed by sounds emitted from the mate robot or an action of the mate robot which is used as the sensor input.
  • Specifically, the pet robot 210 has an action pattern for actions deriving from emotions of the mate robot as information and compares this action pattern with an actual action of the mate robot, for example, a movement of a moving member or an emitted sound, thereby recognizing an emotion expressed by an action of the mate robot. For example, the pet robot 210 has an action pattern for a movement of a foot of the mate robot when the above described mate robot is angry and detects an angry state of the mate robot when a movement of the foot of the mate robot which is obtained with the image recognizing section 220 is coincident with the action pattern.
  • Actions of the pet robot 210 are determined, for example, on the basis of a preliminarily registered emotion model. In other words, the actions of the pet robot 210 result from expressions of emotions. On the premise that the mate robot also has a similar emotion model, the pet robot 210 is capable of comprehending, from the action pattern held for the mate robot, which emotion of the mate robot an observed action results from. By comparing the action information held by the pet robot itself with motion information of the mate robot, the pet robot can easily comprehend the emotions of the mate robot.
  • By recognizing emotions as described above, the pet robot 210 is capable of recognizing that the mate robot is angry when the pet robot 210 detects an angry action, for example, an angry walking manner or an angry eye.
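  • The comparison of an observed movement with stored action patterns might look like the following Python sketch; the distance measure, the tolerance and the names recognize_emotion and action_patterns are assumptions for illustration.

    # Hypothetical sketch: compare an observed movement (e.g. a sampled foot trajectory)
    # against stored action patterns and return the best-matching emotion, if any.
    def recognize_emotion(observed_movement, action_patterns, tolerance=0.1):
        """Return the emotion whose stored pattern is closest to the observation, or None."""
        best_emotion, best_error = None, tolerance
        for emotion, pattern in action_patterns.items():
            if not pattern:
                continue
            error = sum(abs(o - p) for o, p in zip(observed_movement, pattern)) / len(pattern)
            if error < best_error:
                best_emotion, best_error = emotion, error
        return best_emotion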
  • The emotion recognition mechanism section 271 sends information of an emotion expressed by the mate robot which is recognized as described above to the memory and analysis mechanism section 272.
  • On the basis of the information sent from the emotion recognition mechanism section 271, the memory and analysis mechanism section 272 judges a character of the mate robot, for example, an irritable character or a pessimistic character. Specifically, the memory and analysis mechanism section 272 stores the information sent from the emotion recognition mechanism section 271 and analyzes a character of the mate robot on the basis of a change of the information within a certain time.
  • Specifically, the memory and analysis mechanism section 272 takes out information within the certain time from information stored in a data memory (not shown) and analyzes an emotion expression ratio. When information of an emotion related to “anger” is obtained at a ratio shown in FIG. 39, for example, the memory and analysis mechanism section 272 judges that the mate robot has a character which is liable to be angry.
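  • The analysis of the emotion expression ratio within a certain time can be sketched as follows in Python; the window handling, the threshold and the names judge_character and recent_emotions are hypothetical.

    # Hypothetical sketch: judge the mate robot's character from the ratio of emotions
    # recognized within a recent time window (cf. FIG. 39).
    from collections import Counter

    def judge_character(recent_emotions, anger_ratio_threshold=0.5):
        """If "anger" dominates the recent window, judge the mate as liable to be angry."""
        if not recent_emotions:
            return "unknown"
        counts = Counter(recent_emotions)
        if counts["anger"] / len(recent_emotions) >= anger_ratio_threshold:
            return "liable to be angry"
        return "ordinary"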
  • The memory and analysis mechanism section 272 sends information of a character of the mate robot obtained as described above to the emotion parameter change mechanism section 273.
  • The emotion parameter change mechanism section 273 changes parameters of the emotion model (specifically, the emotion and instinct model section) 250. Concretely speaking, the emotion parameter change mechanism section 273 changes parameters of the emotion units related to the emotions of “anger” and “sadness”.
  • Furthermore, the parameters of the above described intensity increase/decrease functions 265A through 265C may be changed as the parameters related to the emotions. Since the intensity information S14A through S14C is generated from the input information S1 through S3 and the action information S12 in accordance with the changed parameters of the intensity increase/decrease functions 265A through 265C in this case, it is possible to breed the pet robot 210 so as to have, for example, a character which is liable to be angry or cheerful.
  • Input into the emotion model section 250 are the sensor inputs, that is, the input information S1 through S3, and the action information S12, and output to the behavior determination mechanism section 251 is emotion state information S10 a which is to be used as emotion values corresponding to the parameters (intensities) changed by the emotion parameter change mechanism section 273. The behavior determination mechanism section 251 disposed at a later stage determines an action of the pet robot 210 on the basis of the emotion values (emotion state information S10 a), thereby expressing a character through the action.
  • The pet robot 210 changes the parameters (intensities) of the emotion model in correspondence to a character of the mate robot as described above, thereby naturally forming a character. When the pet robot 210 is in contact with another robot which is liable to be angry, for example, the pet robot 210 increases parameters of a liability to be angry of its own and makes itself liable to be angry.
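  • The resulting parameter change can be pictured with the short sketch below; the parameter name anger_gain, the step size and the function name adapt_to_mate are assumptions, not the embodiment's identifiers.

    # Hypothetical sketch: nudge the own emotion model toward the judged character of the
    # mate robot ("he that touches pitch shall be defiled").
    def adapt_to_mate(mate_character, emotion_params, step=0.1):
        """Raise the own liability to anger when the mate is judged liable to be angry."""
        if mate_character == "liable to be angry":
            emotion_params["anger_gain"] = emotion_params.get("anger_gain", 1.0) + step
        return emotion_params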
  • Returning to FIG. 34, the behavior determination mechanism section 251 determines a next action on the basis of input information S14 which comprises the command signal S1, the external information signal S2, the internal information signal S3, the emotion and instinct state information S10 and the action information S12, and sends out contents of the above described determined action to the posture transition mechanism section 252 as action command information S16.
  • Specifically, the behavior determination mechanism section 251 uses an algorithm called finite automaton 267 which has a finite number of action states (hereinafter referred to as states) expressing a history of the input information S14 supplied in the past as shown in FIG. 40, and determines a next action by making transition of a state at that time to another state on the basis of the input information S14 which is currently supplied and the above described state at that time. The behavior determination mechanism section 251 makes transition of a state as described above each time the input information S14 is supplied and determines an action dependently on the above described state to which the transition is made, thereby determining the action referring not only to the current input information S14 but also to the past input information S14.
  • Accordingly, the behavior determination mechanism section 251 makes transition to a state ST5 of “standing”, for example, when the input information S14 of “pet robot has lost sight of a ball” is supplied at a state ST1 of “pet robot is chasing a ball”, whereas the behavior determination mechanism section 251 makes transition to a state ST4 of “standing” when the input information S14 of “get up” is supplied at a state ST2 of “lying”. It will be understood that actions which are identical at these states ST4 and ST5 are classified as different states due to different histories of the past input information S14.
  • The behavior determination mechanism section 251 actually makes the transition from the current state to the next state when a predetermined trigger occurs. Concrete examples of the trigger are the execution time of the action in the current state reaching a definite time, the input of specific input information S14, and the intensity of a desired unit among the intensities of the emotion units 260A through 260C and the desire units 261A through 261C represented by the emotion and instinct state information S10 supplied from the emotion and instinct model section 250 exceeding a predetermined threshold value.
  • At this time, the behavior determination mechanism section 251 selects a transition destination state on the basis of whether or not a predetermined threshold value is exceeded by an intensity of a unit desired out of intensities of the emotion units 260A through 260C and the desire units 261A through 261C which are represented by the emotion and instinct state information S10 supplied from the emotion and instinct model section 250. Accordingly, the behavior determination mechanism section 251 is configured to make transition to a state which is different dependently on intensities of the emotion units 260A through 260C and the desire units 261A through 261C even when an identical command signal S1 is input.
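  • The state-transition behavior of the finite automaton 267 can be sketched in Python as follows; the transition table entries, state names and the name next_state are hypothetical, and only the general idea of FIG. 40 (transitions driven by the current state, the input and intensity thresholds) is reproduced.

    # Hypothetical sketch: the next state depends on the current state, the supplied input,
    # and whether a desired emotion/desire intensity exceeds its threshold (the trigger).
    TRANSITIONS = {                                              # illustrative subset of FIG. 40
        ("chasing a ball", "lost sight of the ball"): "standing (ST5)",
        ("lying",          "get up"):                 "standing (ST4)",
    }

    def next_state(state, input_info, intensities, thresholds):
        """Threshold-triggered transitions take priority; otherwise follow the input-driven table."""
        for unit, threshold in thresholds.items():
            if intensities.get(unit, 0.0) > threshold:
                return "expressing " + unit                      # assumed destination for an emotion trigger
        return TRANSITIONS.get((state, input_info), state)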
  • When the behavior determination mechanism section 251 detects a palm stretched before the eye on the basis of the supplied external information signal S2, detects an intensity of the “anger” emotion unit 260C not higher than the predetermined threshold value on the basis of the emotion and instinct state information S10 and detects “pet robot is not hungry”, that is, a battery voltage not lower than a predetermined threshold value on the basis of the internal information signal S3, for example, the behavior determination mechanism section 251 therefore generates action command information S16 for allowing the pet robot to take an action of “hand lending” in correspondence to the palm stretched before the eye and sends out this information to the posture transition mechanism section 252.
  • Furthermore, when the behavior determination mechanism section 251 detects a palm stretched out before the eyes, detects that the intensity of the "anger" emotion unit 260C is not higher than the predetermined threshold value and that "the pet robot is hungry", that is, that the battery voltage is lower than the predetermined threshold value, for example, the behavior determination mechanism section 251 generates the action command information S16 for causing the pet robot to take the action of "licking up a palm" and sends out this information to the posture transition mechanism section 252.
  • Furthermore, when the behavior determination mechanism section 251 detects a palm stretched out before the eyes and an intensity of the "anger" emotion unit 260C not lower than the predetermined threshold value, for example, the behavior determination mechanism section 251 generates the action command information S16 which causes the pet robot to take the action of "looking aside in anger" regardless of whether "the pet robot is hungry", that is, regardless of whether the battery voltage is lower than the predetermined threshold value. When the parameters of the emotion model (the intensities of the emotion units) are changed in correspondence to a mate robot which has a character liable to get angry, for example, the intensities of the emotion model often exceed the predetermined threshold value and the pet robot 210 often takes the action of "looking aside in anger".
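Purely as an illustration of this threshold-based selection, the following sketch maps the palm example to code; the threshold values, function and signal names, and the voltage unit are assumptions, not values given in the embodiment.

```python
# Hypothetical thresholds; the embodiment states only that predetermined
# threshold values exist, not their magnitudes.
ANGER_THRESHOLD = 0.5
BATTERY_THRESHOLD_V = 7.0


def select_action(palm_detected: bool, anger_intensity: float,
                  battery_voltage: float) -> str:
    """Choose the action command issued in response to a palm stretched
    out before the eyes, following the three cases described in the text."""
    if not palm_detected:
        return "no_action"
    if anger_intensity >= ANGER_THRESHOLD:
        # High anger: look aside in anger regardless of hunger.
        return "look_aside_in_anger"
    if battery_voltage >= BATTERY_THRESHOLD_V:
        # Not angry and not hungry: lend a hand.
        return "hand_lending"
    # Not angry but hungry (battery voltage low): lick the palm.
    return "lick_palm"


print(select_action(True, 0.2, 7.4))  # -> "hand_lending"
```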
  • On the basis of the intensity of a desired unit, out of the intensities of the emotion units 260A through 260C and the desire units 261A through 261C represented by the emotion and instinct state information S10 supplied from the emotion and instinct model section 250, the behavior determination mechanism section 251 determines the parameters of the action to be taken at the transition destination state, for example, a walking speed, the magnitudes and speeds of movements of the hands and feet, the pitch and loudness of a sound to be emitted, or the like, generates the action command information S16 corresponding to the parameters of the above described action, and sends out this information to the posture transition mechanism section 252.
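The text does not give concrete formulas for these parameters; the following is only a rough sketch in which the linear scalings, numeric ranges and units are assumptions made for illustration.

```python
def action_parameters(delight: float, anger: float, move_desire: float) -> dict:
    """Derive illustrative action parameters from unit intensities assumed
    to lie in [0, 1]. The scalings and units are assumptions."""
    return {
        "walking_speed_m_s": 0.2 + 0.8 * move_desire,   # faster when eager to move
        "limb_swing_amplitude": 0.5 + 0.5 * delight,    # larger movements when delighted
        "sound_pitch_hz": 400 + 400 * delight,          # higher pitch when delighted
        "sound_loudness_db": 50 + 20 * anger,           # louder when angry
    }


print(action_parameters(delight=0.8, anger=0.1, move_desire=0.6))
```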
  • Incidentally, the input information S1 through S3, which comprises the command signal S1, the external information signal S2 and the internal information signal S3, is input not only into the emotion and instinct model section 250 but also into the behavior determination mechanism section 251, since this information has contents which differ depending on the timing of its input into the emotion and instinct model section 250 and the behavior determination mechanism section 251.
  • When the external information signal S2 of "tapped on the head" is supplied, for example, the controller 242 generates the emotion and instinct state information S10 of "delight" with the emotion and instinct model section 250 and supplies the above described emotion and instinct state information S10 to the behavior determination mechanism section 251, and when the external information signal S2 of "a hand is present before the eyes" is supplied in this state, the controller 242 generates the action command information S16 of "willing to lend a hand" in the behavior determination mechanism section 251 on the basis of the above described information S10 of "delight" and the external information signal S2 of "a hand is present before the eyes", and sends out the action command information S16 to the posture transition mechanism section 252.
  • Returning to FIG. 34, the posture transition mechanism section 252 generates posture transition information S18 for a transition from the current posture to the next posture on the basis of the action command information S16 supplied from the behavior determination mechanism section 251 and sends out the information S18 to the control mechanism section 253. In this case, the postures to which transition is possible from the current posture are determined, for example, by the physical form of the pet robot 210, such as the shapes and weights of the body, hands and feet and the coupled conditions of the component members, and by the mechanisms of the actuators 233A through 233N, such as the bending directions and angles of the joints.
  • Postures to which transition is possible are classified into those to which direct transition is possible from the current posture and those to which direct transition is impossible. From a lying down state of the quadruped pet robot 210 with the four feet thrown widely out, for example, direct transition is possible to a prostrating state but impossible to a standing state, and the pet robot 210 must take actions in two stages: first drawing the hands and feet near the body and then standing up. Furthermore, there are postures which cannot be taken safely. The quadruped pet robot 210 easily falls down, for example, when it attempts to give a hurrah with the two forefeet raised from a standing posture.
  • The postures to which transition is possible are preliminarily registered, and when the action command information S16 supplied from the behavior determination mechanism section 251 indicates a posture to which direct transition is possible, the posture transition mechanism section 252 sends out the above described action command information S16 as the posture transition information S18 without modification to the control mechanism section 253, whereas when the action command information S16 indicates a posture to which direct transition is impossible, the posture transition mechanism section 252 generates the posture transition information S18 which causes a transition once to another posture to which transition is possible and then a transition to the target posture, and sends out this information to the control mechanism section 253. Accordingly, the pet robot 210 is capable of avoiding an unreasonable attempt to take a posture to which transition is impossible, and of avoiding falling down.
  • Specifically, the posture transition mechanism section 252 is configured to preliminarily register the postures which the pet robot 210 can take and to record the postures between which transition is possible. The posture transition mechanism section 252 uses an algorithm called directed graph 270 which expresses the postures which the pet robot 210 can take as nodes ND1 through ND5 and connects the postures between which transition is possible, that is, the nodes ND1 through ND5, with directed arcs a1 through a10, for example, as shown in FIG. 41.
  • When the action command information S16 is supplied from the behavior determination mechanism section 251, the posture transition mechanism section 252 plans a posture transition by searching, in the directions indicated by the directed arcs a, for a path from the node ND corresponding to the current posture to the node ND corresponding to the posture to be taken next, which is indicated by the action command information S16, so as to connect the current node ND to the next node ND, and by sequentially recording the nodes ND existing on the searched path. Accordingly, the pet robot 210 is capable of realizing the action indicated by the behavior determination mechanism section 251 while avoiding an unreasonable attempt to take a posture to which transition is impossible and avoiding falling down.
  • When the action command information S16 of "sit down" is supplied at a node ND2 indicating the current posture of "prostration", for example, the posture transition mechanism section 252 gives the posture transition information S18 of "sit down" to the control mechanism section 253, utilizing the fact that direct transition is possible from the node ND2 indicating the posture of "prostration" to a node ND5 indicating the posture of "sitting down". When the action command information S16 of "walk" is supplied, in contrast, the posture transition mechanism section 252 plans a posture transition by searching for a path from the node ND2 of "prostration" to a node ND4 of "walking", generates the posture transition information S18 which emits a command of "stand up" and then a command of "walk", and sends out the posture transition information S18 to the control mechanism section 253.
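The planning on the directed graph 270 amounts to a path search over the posture nodes. The sketch below uses a breadth-first search over a hypothetical node and arc set patterned on the prostration/standing/walking example; the concrete graph, posture names and function interface are assumptions.

```python
from collections import deque

# Hypothetical directed graph of postures; the arcs mirror the kind of
# structure shown in FIG. 41 but are not taken from it literally.
POSTURE_GRAPH = {
    "lying_down": ["prostration"],
    "prostration": ["sitting_down", "standing"],
    "sitting_down": ["prostration"],
    "standing": ["walking", "prostration"],
    "walking": ["standing"],
}


def plan_posture_transition(current: str, target: str):
    """Breadth-first search from the current posture node to the target
    node, returning the sequence of postures to pass through, or None
    when no transition path exists."""
    queue = deque([[current]])
    visited = {current}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in POSTURE_GRAPH.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None


print(plan_posture_transition("prostration", "walking"))
# -> ['prostration', 'standing', 'walking']  (stand up, then walk)
```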
  • Returning to FIG. 34, the control mechanism section 253 is configured to allow the pet robot 210 to take a desired action by generating the control signal S5 for driving the actuators 233A through 233N on the basis of the posture transition information S18 and sending out this signal to the actuators 233A through 233N so as to drive them.
  • (5-4) Operations and Effect
  • In the above described configuration, the emotion and instinct model section 250 changes states of an emotion and an instinct of the pet robot 210 on the basis of the supplied input information S1 through S3 and reflects changes of the states of the emotion and the instinct on an action of the pet robot 210, thereby allowing the pet robot 210 to autonomously act on the basis of the states of emotions and instincts of its own.
  • The emotion and instinct model section 250 (the emotion model section shown in FIG. 38) is capable of determining an action depending on a character which is changed in correspondence to the character of the mate robot. Accordingly, the user can enjoy the character formation process of the pet robot 210 in correspondence to another robot and gain an interest in breeding.
  • Furthermore, the behavior determination mechanism section 251 of the controller 242 determines the next state succeeding the current state on the basis of the current state, which corresponds to the history of the input information S14 supplied sequentially, and the input information S14 which is supplied next, thereby allowing the pet robot 210 to act autonomously on the basis of the states of its own emotions and instincts.
  • Furthermore, the posture transition mechanism section 252 of the controller 242 makes a transition from the current posture of the pet robot 210 to the posture corresponding to the action command information S16 by changing the current posture through a predetermined path, thereby avoiding attempts to take unreasonable postures and avoiding falls.
  • The above described configuration changes the states of an emotion and an instinct of the pet robot 210 on the basis of the input information S1 through S3 supplied to the controller 242, determines an action of the pet robot 210 on the basis of the changes of the states of the emotion and the instinct, selects a posture to which transition is possible depending on the above described determined action, and moves the pet robot 210, thereby making it possible to realize a pet robot 210 which is capable of acting autonomously on the basis of its own emotions and instincts and of taking actions quite similar to those of a real pet.
  • (5-5) Other Embodiments
  • Note that, though the emotion parameter change mechanism section 273 changes the character in correspondence to the character of the mate robot by modifying the parameters of the emotion model in the aforementioned fifth embodiment, the present invention is not limited thereto and the character can also be changed by modifying parameters of the behavior determination mechanism section 251 as shown in FIG. 42. In this case, adjustable parameters of the behavior determination mechanism section 251 are changed; for example, the transition probabilities used when the finite automaton makes state transitions in response to the supplied input information S14 are changed.
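A minimal sketch of this alternative is given below, assuming the transition probabilities are held in a per-(state, input) table; the table contents and the adjustment rule are assumptions made for illustration, not the actual parameters of the behavior determination mechanism section 251.

```python
# Hypothetical probability table: for each (state, input) pair, candidate
# next states with their transition probabilities.
transition_probs = {
    ("standing", "palm_before_eyes"): {
        "hand_lending": 0.7,
        "look_aside_in_anger": 0.3,
    },
}


def bias_toward_anger(table: dict, factor: float = 1.5) -> None:
    """Scale up the probabilities of anger-related transitions and
    renormalize each candidate set, as one way of shifting the character
    toward that of an easily-angered mate robot."""
    for candidates in table.values():
        for state in candidates:
            if "anger" in state:
                candidates[state] *= factor
        total = sum(candidates.values())
        for state in candidates:
            candidates[state] /= total


bias_toward_anger(transition_probs)
print(transition_probs[("standing", "palm_before_eyes")])
# the anger-related transition is now selected more often than before
```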
  • Further, though the character state is changed only on the basis of an emotion expressed by the mate robot in the aforementioned fifth embodiment, the present invention is not limited thereto and reference can also be made to other information. The character state can be changed, for example, by disposing a dialogue analysis mechanism section 274 as shown in FIG. 43 and analyzing the dialogue between the mate robot and the user (owner).
  • The dialogue analysis mechanism section 274 analyzes the dialogue between the user and the mate robot, for example, language uttered by the user to the mate robot or a gesture shown by the user to the mate robot. The dialogue analysis mechanism section 274 analyzes, for example, whether the user is scolding or striking the mate robot, or the like. The dialogue analysis mechanism section 274 sends the analysis result to the memory and analysis mechanism section 272.
  • The memory and analysis mechanism section 272 discriminates the character of the mate robot on the basis of the information on the emotion expressed by the mate robot, which is supplied from the emotion recognition mechanism section 271, and the analysis result obtained by the dialogue analysis mechanism section 274.
  • Though the mate robot would be judged to have a character which is liable to be angry in a case where only barking information is sent from the emotion recognition mechanism section 271, referring to the dialogue with the user prevents the mate robot from being easily judged to have such a character in a case where the mate robot is barking because it has been struck by the user, thereby making it possible to perform a composite judgement depending on the environment.
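One way to picture this composite judgement is sketched below; the counting scheme, function name and threshold are purely assumptions used to illustrate discounting barks that the dialogue analysis attributes to the user striking the robot.

```python
def judged_irritable(barking_events: int, struck_by_user_events: int,
                     threshold: int = 5) -> bool:
    """Treat barking as evidence of an easily-angered character, but
    discount barks explained by the mate robot having been struck by
    the user, as reported by the dialogue analysis."""
    unprovoked_barks = max(0, barking_events - struck_by_user_events)
    return unprovoked_barks >= threshold


print(judged_irritable(barking_events=8, struck_by_user_events=6))  # -> False
print(judged_irritable(barking_events=8, struck_by_user_events=0))  # -> True
```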
  • Further, though the character of the pet robot 210 is changed so as to match the character of the mate robot in the aforementioned fifth embodiment, the present invention is not limited thereto and it is also possible to change the character of the pet robot 210 so as to be opposed to the character of the mate robot, that is, in the reverse direction.
  • Further, though reference is made to the actions of the mate robot in order to judge the character of the mate robot in the aforementioned fifth embodiment, the present invention is not limited thereto and it is also possible to judge the character of the mate robot through data communication with the mate robot. The data communication may be radio communication or wired communication.
  • Further, though the mate robot is single in the aforementioned fifth embodiment, the present invention is not limited thereto and there may be a plurality of mate robots. In this case, the pet robot 210 is capable of discriminating the individual robots and of changing its own character collectively in accordance with the plurality of robots or individually in accordance with specific robots. Accordingly, the pet robot 210 is capable of changing its own character depending on the characters of the plurality of mate robots when these robots have characters which differ from one another.
  • Under a circumstance where a plurality of mate robots exists as described above, it is further necessary to discriminate the individual robots. In this case, predetermined patterns may be applied to the faces of the mate robots so that the pet robot 210 can discriminate the robots individually. For example, so-called bar codes may be attached to the mate robots as predetermined marks so that the pet robot 210 can discriminate the plurality of robots individually. In a case where the characters of the mate robots are to be judged through data communication, discriminating information may be attached to the character data so that the pet robot 210 can discriminate the plurality of robots. The discriminating information can be, for example, discriminating numbers of devices of the mate robots, for example, the IDs of so-called Memory Sticks (a trademark of Sony Corporation), which are memory media configured to be detachable.
  • Further, though description is made mainly of emotion recognition from the actions of movable members of the mate robot in the aforementioned fifth embodiment, the present invention is not limited thereto and it is also possible to recognize an emotion of the mate robot from a contact state of the mate robot detected with the touch sensor 224.
  • Further, in the aforementioned fifth embodiment, though the pet robot 210 receives a user's command sent from the remote controller with the infrared ray, the present invention is not limited by the embodiment and the pet robot 210 may be configured to receive a user's command sent, for example, with a radio wave or an acoustic wave.
  • Further, though the user's command is input through the command receiving section 240, which comprises the remote controller receiver 223 and the microphone 221, in the aforementioned fifth embodiment, the present invention is not limited thereto and it is also possible, for example, to connect a computer to the pet robot 210 and input the user's command via the above described connected computer.
  • Further, though the states of an emotion and an instinct are determined using the emotion units 260A through 260C expressing the emotions of "delight", "sadness" and "anger" as well as the desire units 261A through 261C expressing the desires of "appetite", "desire for sleep" and "desire for movement" in the aforementioned fifth embodiment, the present invention is not limited thereto and it is also possible to add an emotion unit expressing an emotion of "loneliness" to the emotion units 260A through 260C and to add a desire unit expressing "desire for love" to the desire units 261A through 261C, or to determine the states of an emotion and an instinct using a combination of various other kinds of emotion units and desire units.
  • Further, though the next action is determined by the behavior determination mechanism section 251 on the basis of the command signal S1, the external information signal S2, the internal information signal S3, the emotion and instinct state information S10 and the action information S12 in the aforementioned fifth embodiment, the present invention is not limited thereto and the next action may be determined on the basis of only some of the command signal S1, the external information signal S2, the internal information signal S3, the emotion and instinct state information S10 and the action information S12.
  • Further, though the next action is determined using the algorithm called finite automaton 267 in the aforementioned fifth embodiment, the present invention is not limited thereto and an action may be determined using an algorithm called a state machine which does not have a finite number of states; in this case, a new state is generated each time the input information S14 is supplied and an action is determined in accordance with the above described generated state.
  • Further, though the next action is determined using the algorithm called finite automaton 267 in the aforementioned fifth embodiment, the present invention is not limited thereto and an action may also be determined using an algorithm called probability finite automaton which selects a plurality of states as prospective transition destinations on the basis of the input information S14 currently supplied and the state at that time, and determines the transition destination state out of the selected plurality of states at random using random numbers.
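A minimal sketch of such a probability finite automaton follows; the candidate table, state names and probabilities are assumptions, and the weighted random draw uses Python's standard random module.

```python
import random

# Hypothetical candidates: for each (state, input) pair, the prospective
# transition destinations and their selection probabilities.
CANDIDATES = {
    ("standing", "ball_seen"): (["chasing_ball", "ignoring_ball", "barking"],
                                [0.6, 0.3, 0.1]),
}


def next_state(state: str, input_info: str) -> str:
    """Select the transition destination at random from the prospective
    destination states, weighted by their probabilities."""
    states, weights = CANDIDATES[(state, input_info)]
    return random.choices(states, weights=weights, k=1)[0]


print(next_state("standing", "ball_seen"))  # e.g. "chasing_ball"
```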
  • Further, in the aforementioned fifth embodiment, the action command information S16 is sent without modification to the control mechanism section 253 as the posture transition information S18 when the action command information S16 indicates a posture to which direct transition is possible, and the posture transition information S18 for a transition once to a posture to which transition is possible and then to the target posture is generated and sent to the control mechanism section 253 when the action command information S16 indicates a posture to which direct transition is impossible. However, the present invention is not limited thereto, and it is also possible to accept the action command information S16 and send this information to the control mechanism section 253 only when the action command information S16 indicates a posture to which direct transition is possible, and to refuse the above described action command information S16 when this information indicates a posture to which direct transition is impossible.
  • Further, though the present invention is applied to the pet robot 210 in the aforementioned fifth embodiment, the present invention is not limited thereto and is applicable to various other kinds of robots, for example, robot apparatuses which are used in entertainment fields such as games and exhibitions.
  • Furthermore, the appearance of the pet robot 210 to which the present invention is applied is not necessarily configured as shown in FIG. 32 but may be configured as shown in FIG. 44 so as to resemble a real dog.
  • INDUSTRIAL APPLICABILITY
  • The present invention is applicable to robots which are used in entertainment fields such as games and exhibitions, pet robots which are used as pets, and the like.

Claims (28)

1. A robot apparatus comprising:
memory means for storing behavioral models; and
action generating means for generating action by the use of partial or full state space of said behavioral model, and wherein
said action generating means changes said state space to be used for said action generation, of said behavioral models while expanding or reducing said state space.
2. The robot apparatus according to claim 1, wherein:
said behavioral models consist of probability state transition models; and
said action generating means changes said transition probability to a transition-prohibited state, which is set to 0 in said behavioral models, to a predetermined value higher than 0, thereby said state space to be used for said action generation of said behavioral models is expanded.
3. The robot apparatus according to claim 1, wherein:
said behavioral model consists of probability state transition models; and
said action generating means sets transition probability to a target state to 0 to thereby reduce said state space to be used for said action generation of said behavioral models.
4. The robot apparatus according to claim 1, having
growth models which grow stepwise, and wherein
said action generating means changes state space to be used for said action generation of said behavioral models in accordance with said growth of said growth models while expanding or reducing it.
5. A robot apparatus, having behavioral models comprising state transition models and for generating action on the basis of said behavioral model, wherein:
in said behavioral model, transition to a predetermined node is described as transition to a virtual node consisting of imaginary nodes, and a predetermined node group is allocated to said virtual node, and
changing means for changing said node group to be allocated to said virtual node is provided.
6. The robot apparatus according to claim 5, having
growth models which grow stepwise, and wherein
said changing means changes said node group to be allocated to said virtual node, in accordance with said growth of said growth models.
7. A control method for a robot apparatus, having behavioral models, and for generating action on the basis of said behavioral model, comprising:
a first step of generating said action by the use of partial or full state space of said behavioral model; and
a second step of changing said state space to be used for said action generation, of said behavioral models while expanding or reducing said state space.
8. The control method for a robot apparatus according to claim 7, wherein:
said behavioral models comprise probability state transition models; and
in said second step, said transition probability to a transition-prohibited state, which is set to 0 in said behavioral models, is changed to a predetermined value higher than 0, thereby said state space to be used for said action generation of said behavioral models is expanded.
9. The control method for a robot apparatus according to claim 7, wherein:
said behavioral models comprise probability state transition models; and
in said second step, transition probability to a target state is set to 0 to thereby reduce said state space to be used for said action generation of said behavioral models.
10. The control method for a robot apparatus according to claim 7, wherein:
said robot apparatus has growth models which grow stepwise; and
in said second step, state space to be used for said action generation of said behavioral models is changed in accordance with said growth of said growth models while expanding or reducing it.
11. A control method for a robot apparatus, having behavioral models comprising state transition models, and for generating action on the basis of said behavioral models, comprising:
a first step of describing transition to a predetermined node in said behavioral models as transition to a virtual node consisting of imaginary nodes, and allocating a predetermined node group to said virtual node; and
a second step of changing said node group to be allocated to said virtual node.
12. The control method for a robot apparatus according to claim 11, wherein:
said robot apparatus has growth models which grow stepwise; and
in said second step, said node group to be allocated to said virtual node is changed in accordance with said growth of said growth models.
13-32. (canceled)
33. A robot apparatus, comprising:
detecting means for detecting an output from another robot apparatus; and
character discriminating means for discriminating a character of said another robot apparatus on the basis of a result detected by said detecting means.
34. The robot apparatus according to claim 33, comprising
character changing means for changing own character on the basis of the result detected by said character discriminating means.
35. The robot apparatus according to claim 33, wherein:
said detecting means comprises:
an action detecting section for detecting an action of said another robot apparatus; and
emotion recognizing means for recognizing an emotion of said another robot apparatus on the basis of a result detected by said action detecting section; and wherein
said character discriminating means discriminates a character
of said another robot apparatus on the basis of said emotion recognized by said emotion recognizing means.
36. The robot apparatus according to claim 35, wherein
said character discriminating means discriminates a character of said another robot apparatus on the basis of said emotion within a definite time, which is recognized by said emotion recognizing means.
37. The robot apparatus according to claim 35, wherein:
said detecting means detects emotion data or character data from said another robot apparatus; and
said character discriminating means discriminates a character of said another robot apparatus on the basis of said emotion data or character data.
38. The robot apparatus according to claim 34, wherein
said character changing means changes a parameter of a character model which determines own character, on the basis of a result discriminated by said character discriminating means.
39. The robot apparatus according to claim 33, comprising
action control means for moving the robot apparatus as a whole and component members, on the basis of action information, and wherein
said character changing means changes said action information on the basis of a result discriminated by said character discriminating means.
40. The robot apparatus according to claim 35, comprising
memory means for storing action patterns deriving from an emotion of another robot apparatus, and wherein
said emotion recognizing means recognizes an emotion by comparing an action of said another robot apparatus with said action pattern.
41. The robot apparatus according to claim 33, comprising
dialogue detecting means for detecting a dialogue between another robot apparatus and a user, and wherein
said character discriminating means discriminates a character of said another robot apparatus by referring to a result detected by said dialogue detecting means.
42. A character discriminating method for robot apparatus, comprising:
a detecting step for detecting an output from a robot apparatus to discriminate a character of said robot apparatus on the basis of a detected result.
43. The character discriminating method for robot apparatus according to claim 42, wherein
a character discriminating result is used for changing a character of another robot apparatus.
44. The character discriminating method for robot apparatus according to claim 43, wherein
an emotion is recognized from an action of said robot apparatus which is an output from said robot apparatus, to discriminate a character of said robot apparatus on the basis of a recognition result of the emotion.
45. The character discriminating method for robot apparatus according to claim 44, wherein
the character of said robot apparatus is discriminated on the basis of a recognition result of said emotion within a definite time.
46. The character discriminating method for robot apparatus according to claim 42, wherein
emotion data or character data from said robot apparatus which is an output from said robot apparatus is detected, to discriminate a character of said robot apparatus on the basis of said emotion data or character data.
47. The character discriminating method for robot apparatus according to claim 44, wherein
another robot apparatus which stores action patterns deriving from an emotion of said robot apparatus recognizes an emotion by comparing an action of said robot apparatus with said action pattern.
US11/244,341 1999-11-30 2005-10-05 Robot apparatus and control method therefor, and robot character discriminating method Abandoned US20060041332A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/244,341 US20060041332A1 (en) 1999-11-30 2005-10-05 Robot apparatus and control method therefor, and robot character discriminating method

Applications Claiming Priority (9)

Application Number Priority Date Filing Date Title
JP11-341207 1999-11-30
JP34120699A JP2001157977A (en) 1999-11-30 1999-11-30 Robot device, and control method thereof
JP34137599A JP2001157983A (en) 1999-11-30 1999-11-30 Robot device and character determining method of robot device
JP34120799A JP2001157978A (en) 1999-11-30 1999-11-30 Robot device, and control method thereof
JP11-341206 1999-11-30
JP11-341375 1999-11-30
US09/890,231 US7117190B2 (en) 1999-11-30 2000-11-30 Robot apparatus, control method thereof, and method for judging character of robot apparatus
PCT/JP2000/008472 WO2001039932A1 (en) 1999-11-30 2000-11-30 Robot apparatus, control method thereof, and method for judging character of robot apparatus
US11/244,341 US20060041332A1 (en) 1999-11-30 2005-10-05 Robot apparatus and control method therefor, and robot character discriminating method

Related Parent Applications (2)

Application Number Title Priority Date Filing Date
US09/890,231 Continuation US7117190B2 (en) 1999-11-30 2000-11-30 Robot apparatus, control method thereof, and method for judging character of robot apparatus
PCT/JP2000/008472 Continuation WO2001039932A1 (en) 1999-11-30 2000-11-30 Robot apparatus, control method thereof, and method for judging character of robot apparatus

Publications (1)

Publication Number Publication Date
US20060041332A1 true US20060041332A1 (en) 2006-02-23

Family

ID=27340991

Family Applications (2)

Application Number Title Priority Date Filing Date
US09/890,231 Expired - Fee Related US7117190B2 (en) 1999-11-30 2000-11-30 Robot apparatus, control method thereof, and method for judging character of robot apparatus
US11/244,341 Abandoned US20060041332A1 (en) 1999-11-30 2005-10-05 Robot apparatus and control method therefor, and robot character discriminating method

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US09/890,231 Expired - Fee Related US7117190B2 (en) 1999-11-30 2000-11-30 Robot apparatus, control method thereof, and method for judging character of robot apparatus

Country Status (4)

Country Link
US (2) US7117190B2 (en)
KR (1) KR20010101883A (en)
CN (1) CN1151016C (en)
WO (1) WO2001039932A1 (en)

Families Citing this family (50)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI234105B (en) * 2002-08-30 2005-06-11 Ren-Guang King Pointing device, and scanner, robot, mobile communication device and electronic dictionary using the same
US7421418B2 (en) * 2003-02-19 2008-09-02 Nahava Inc. Method and apparatus for fundamental operations on token sequences: computing similarity, extracting term values, and searching efficiently
US7707135B2 (en) * 2003-03-04 2010-04-27 Kurzweil Technologies, Inc. Enhanced artificial intelligence language
US7534157B2 (en) 2003-12-31 2009-05-19 Ganz System and method for toy adoption and marketing
JP4244812B2 (en) * 2004-01-16 2009-03-25 ソニー株式会社 Action control system and action control method for robot apparatus
KR100590549B1 (en) * 2004-03-12 2006-06-19 삼성전자주식회사 Remote control method for robot using 3-dimensional pointing method and robot control system thereof
TWM258789U (en) * 2004-06-09 2005-03-11 Ming-Hsiang Yeh Interactive toy
KR100595821B1 (en) * 2004-09-20 2006-07-03 한국과학기술원 Emotion synthesis and management for personal robot
US8583282B2 (en) 2005-09-30 2013-11-12 Irobot Corporation Companion robot for personal interaction
ES2270741B1 (en) * 2006-11-06 2008-03-01 Imc. Toys S.A. TOY.
US20080119959A1 (en) * 2006-11-21 2008-05-22 Park Cheonshu Expression of emotions in robot
KR100866212B1 (en) * 2007-02-08 2008-10-30 삼성전자주식회사 Genetic robot platform and genetic robot behavior manifestation method
GB0716459D0 (en) * 2007-08-23 2007-10-03 Funky Moves Ltd Interactive sporting apparatus
EP2367606A4 (en) * 2008-11-27 2012-09-19 Univ Stellenbosch A toy exhibiting bonding behaviour
KR100968944B1 (en) * 2009-12-14 2010-07-14 (주) 아이알로봇 Apparatus and method for synchronizing robot
FR2962048A1 (en) * 2010-07-02 2012-01-06 Aldebaran Robotics S A HUMANOID ROBOT PLAYER, METHOD AND SYSTEM FOR USING THE SAME
US8483873B2 (en) * 2010-07-20 2013-07-09 Innvo Labs Limited Autonomous robotic life form
US9186575B1 (en) * 2011-03-16 2015-11-17 Zynga Inc. Online game with animal-breeding mechanic
US20130078886A1 (en) * 2011-09-28 2013-03-28 Helena Wisniewski Interactive Toy with Object Recognition
US20130268119A1 (en) * 2011-10-28 2013-10-10 Tovbot Smartphone and internet service enabled robot systems and methods
EP3102366B1 (en) * 2014-02-05 2018-06-27 ABB Schweiz AG A system and method for creating and editing a playlist defining motions of a plurality of robots cooperatively performing a show
KR20150100165A (en) * 2014-02-24 2015-09-02 주식회사 로보빌더 Joining apparatus of modular actuator
CN104881108B (en) * 2014-02-27 2018-08-31 青岛海尔机器人有限公司 A kind of intelligent human-machine interaction method and device
EP2933067B1 (en) * 2014-04-17 2019-09-18 Softbank Robotics Europe Method of performing multi-modal dialogue between a humanoid robot and user, computer program product and humanoid robot for implementing said method
CN106926236B (en) * 2015-12-31 2020-06-30 深圳光启合众科技有限公司 Method and device for acquiring state of robot
CN108885436B (en) 2016-01-15 2021-12-14 美国iRobot公司 Autonomous monitoring robot system
US10101739B2 (en) 2016-03-21 2018-10-16 Sphero, Inc. Multi-body self propelled device with induction interface power transfer
US20170282383A1 (en) * 2016-04-04 2017-10-05 Sphero, Inc. System for content recognition and response action
CN107590503A (en) * 2016-07-07 2018-01-16 深圳狗尾草智能科技有限公司 A kind of robot affection data update method and system
JP6475872B2 (en) * 2016-07-11 2019-02-27 Groove X株式会社 Autonomous behavior robot with controlled amount of activity
CN106503043B (en) * 2016-09-21 2019-11-08 北京光年无限科技有限公司 A kind of interaction data processing method for intelligent robot
CN107962571B (en) * 2016-10-18 2021-11-02 江苏网智无人机研究院有限公司 Target object control method, device, robot and system
US10100968B1 (en) 2017-06-12 2018-10-16 Irobot Corporation Mast systems for autonomous mobile robots
CN107544248B (en) 2017-09-13 2019-12-13 上海思岚科技有限公司 Task optimization method and device in mobile robot
CN109968352B (en) * 2017-12-28 2021-06-04 深圳市优必选科技有限公司 Robot control method, robot and device with storage function
US11633863B2 (en) * 2018-04-06 2023-04-25 Digital Dream Labs, Llc Condition-based robot audio techniques
CN109086863A (en) * 2018-09-02 2018-12-25 重庆市南岸区瑜目网络科技有限责任公司 A method of allow artificial intelligence robot that there is mankind's individual character
KR102228866B1 (en) * 2018-10-18 2021-03-17 엘지전자 주식회사 Robot and method for controlling thereof
CN111290570A (en) * 2018-12-10 2020-06-16 中国移动通信集团终端有限公司 Control method, device, equipment and medium for artificial intelligence equipment
US11110595B2 (en) 2018-12-11 2021-09-07 Irobot Corporation Mast systems for autonomous mobile robots
CN111496802A (en) * 2019-01-31 2020-08-07 中国移动通信集团终端有限公司 Control method, device, equipment and medium for artificial intelligence equipment
US10682575B1 (en) 2019-10-03 2020-06-16 Mythical, Inc. Systems and methods for generating in-game assets for a gaming platform based on inheriting characteristics from other in-game assets
US11389735B2 (en) * 2019-10-23 2022-07-19 Ganz Virtual pet system
JP7070529B2 (en) * 2019-10-31 2022-05-18 カシオ計算機株式会社 Equipment control device, equipment control method and program
US11957991B2 (en) * 2020-03-06 2024-04-16 Moose Creative Management Pty Limited Balloon toy
JP2021181141A (en) * 2020-05-20 2021-11-25 セイコーエプソン株式会社 Charging method and charging system
US11358059B2 (en) 2020-05-27 2022-06-14 Ganz Live toy system
US11192034B1 (en) 2020-07-08 2021-12-07 Mythical, Inc. Systems and methods for determining how much of a created character is inherited from other characters
KR102295836B1 (en) * 2020-11-20 2021-08-31 오로라월드 주식회사 Apparatus And System for Growth Type Smart Toy
JP7431149B2 (en) * 2020-12-17 2024-02-14 トヨタ自動車株式会社 mobile system

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3018865B2 (en) 1993-10-07 2000-03-13 富士ゼロックス株式会社 Emotion expression device
JPH10235019A (en) * 1997-02-27 1998-09-08 Sony Corp Portable life game device and its data management device
JPH11126017A (en) 1997-08-22 1999-05-11 Sony Corp Storage medium, robot, information processing device and electronic pet system
JPH1165417A (en) * 1997-08-27 1999-03-05 Omron Corp Device and method for virtual pet raising and program record medium

Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5572646A (en) * 1993-08-25 1996-11-05 Casio Computer Co., Ltd. Apparatus for displaying images of living things to show growing and/or moving of the living things
US5870527A (en) * 1995-10-17 1999-02-09 Sony Corporation Robot control methods and apparatus
US6038493A (en) * 1996-09-26 2000-03-14 Interval Research Corporation Affect-based robot communication methods and systems
US5966526A (en) * 1997-03-18 1999-10-12 Kabushiki Kaisha Bandai Simulation device for fostering a virtual creature
US6253167B1 (en) * 1997-05-27 2001-06-26 Sony Corporation Client apparatus, image display controlling method, shared virtual space providing apparatus and method, and program providing medium
US20010007825A1 (en) * 1997-10-03 2001-07-12 Nintendo Co., Ltd. Pedometer with game mode
US6302789B2 (en) * 1997-10-03 2001-10-16 Nintendo Co., Ltd. Pedometer with game mode
US6353814B1 (en) * 1997-10-08 2002-03-05 Michigan State University Developmental learning machine and method
US6656049B1 (en) * 1998-02-27 2003-12-02 Kabushiki Kaisah Sega Enterprises Electronic game apparatus
US6772121B1 (en) * 1999-03-05 2004-08-03 Namco, Ltd. Virtual pet device and control program recording medium therefor
US6385506B1 (en) * 1999-03-24 2002-05-07 Sony Corporation Robot
US6445978B1 (en) * 1999-05-10 2002-09-03 Sony Corporation Robot device and method for controlling the same
US6595858B1 (en) * 1999-08-26 2003-07-22 Nintendo Co., Ltd. Image-display game system
US6584376B1 (en) * 1999-08-31 2003-06-24 Swisscom Ltd. Mobile robot and method for controlling a mobile robot
US20020016128A1 (en) * 2000-07-04 2002-02-07 Tomy Company, Ltd. Interactive toy, reaction behavior pattern generating device, and reaction behavior pattern generating method
US20020098879A1 (en) * 2001-01-19 2002-07-25 Rheey Jin Sung Intelligent pet robot

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8082216B2 (en) * 2007-06-11 2011-12-20 Sony Corporation Information processing apparatus, method and program having a historical user functionality adjustment
US20080319930A1 (en) * 2007-06-11 2008-12-25 Sony Corporation Information processing apparatus and method, and program
US20090104844A1 (en) * 2007-10-19 2009-04-23 Hon Hai Precision Industry Co., Ltd. Electronic dinosaur toys
US7988522B2 (en) * 2007-10-19 2011-08-02 Hon Hai Precision Industry Co., Ltd. Electronic dinosaur toy
US20090254217A1 (en) * 2008-04-02 2009-10-08 Irobot Corporation Robotics Systems
US8452448B2 (en) * 2008-04-02 2013-05-28 Irobot Corporation Robotics systems
US20150375129A1 (en) * 2009-05-28 2015-12-31 Anki, Inc. Mobile agents for manipulating, moving, and/or reorienting components
US11027213B2 (en) 2009-05-28 2021-06-08 Digital Dream Labs, Llc Mobile agents for manipulating, moving, and/or reorienting components
US9919232B2 (en) * 2009-05-28 2018-03-20 Anki, Inc. Mobile agents for manipulating, moving, and/or reorienting components
US20120116584A1 (en) * 2010-11-04 2012-05-10 Kt Corporation Apparatus and method for providing robot interaction services using interactive behavior model
US8644990B2 (en) * 2010-11-04 2014-02-04 Kt Corporation Apparatus and method for providing robot interaction services using interactive behavior model
US20150231784A1 (en) * 2012-03-23 2015-08-20 Irobot Corporation Robot controller learning system
US9135554B2 (en) * 2012-03-23 2015-09-15 Irobot Corporation Robot controller learning system
US9104231B2 (en) 2012-09-27 2015-08-11 Microsoft Technology Licensing, Llc Mood-actuated device
US20140188276A1 (en) * 2012-12-31 2014-07-03 Microsoft Corporation Mood-actuated device
US9046884B2 (en) * 2012-12-31 2015-06-02 Microsoft Technology Licensing, Llc Mood-actuated device
CN104598913A (en) * 2013-10-30 2015-05-06 广州华久信息科技有限公司 Face-based emotional health promotion method and system
US10513038B2 (en) * 2016-03-16 2019-12-24 Fuji Xerox Co., Ltd. Robot control system
US11285614B2 (en) 2016-07-20 2022-03-29 Groove X, Inc. Autonomously acting robot that understands physical contact
US10265844B2 (en) * 2017-03-24 2019-04-23 International Business Machines Corporation Creating assembly plans based on triggering events
US10543595B2 (en) * 2017-03-24 2020-01-28 International Business Machines Corporation Creating assembly plans based on triggering events
US10532456B2 (en) * 2017-03-24 2020-01-14 International Business Machines Corporation Creating assembly plans based on triggering events
US20220299999A1 (en) * 2021-03-16 2022-09-22 Casio Computer Co., Ltd. Device control apparatus, device control method, and recording medium

Also Published As

Publication number Publication date
US7117190B2 (en) 2006-10-03
CN1151016C (en) 2004-05-26
US20030045203A1 (en) 2003-03-06
WO2001039932A1 (en) 2001-06-07
CN1338980A (en) 2002-03-06
KR20010101883A (en) 2001-11-15

Similar Documents

Publication Publication Date Title
US7117190B2 (en) Robot apparatus, control method thereof, and method for judging character of robot apparatus
US6445978B1 (en) Robot device and method for controlling the same
KR100864339B1 (en) Robot device and behavior control method for robot device
US6505098B1 (en) Robot system, robot device, and its cover
US6650965B2 (en) Robot apparatus and behavior deciding method
US6362589B1 (en) Robot apparatus
JP2003039363A (en) Robot device, action learning method therefor, action learning program thereof, and program recording medium
JP2011115944A (en) Robot device, robot device action control method, and program
JP2005193331A (en) Robot device and its emotional expression method
US7063591B2 (en) Edit device, edit method, and recorded medium
US6711467B2 (en) Robot apparatus and its control method
JP2006110707A (en) Robot device
JP4296736B2 (en) Robot device
US20030056252A1 (en) Robot apparatus, information display system, and information display method
JP2002205289A (en) Action control method for robot device, program, recording medium and robot device
JP2001157981A (en) Robot device and control method thereof
JP2001157980A (en) Robot device, and control method thereof
JP2001157979A (en) Robot device, and control method thereof
JP2001157982A (en) Robot device and control method thereof
JP2001154707A (en) Robot device and its controlling method
JP2001157977A (en) Robot device, and control method thereof
JP2002269530A (en) Robot, behavior control method of the robot, program and storage medium
JP2002120182A (en) Robot device and control method for it
JP2005193330A (en) Robot device and its emotional expression method
JP2001157978A (en) Robot device, and control method thereof

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION