US20010024213A1 - User interface apparatus and operation range presenting method - Google Patents

User interface apparatus and operation range presenting method

Info

Publication number
US20010024213A1
Authority
US
United States
Prior art keywords
image processing
proper range
image
input
cursor
Prior art date
Legal status
Abandoned
Application number
US09/860,496
Inventor
Miwako Doi
Akira Morishita
Naoko Umeki
Shunichi Numazaki
Current Assignee
Individual
Original Assignee
Individual
Priority date
Filing date
Publication date
Priority claimed from JP00977397A (JP3819096B2)
Priority claimed from JP00949697A (JP3588527B2)
Application filed by Individual
Priority to US09/860,496
Publication of US20010024213A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 - Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00 - Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/048 - Indexing scheme relating to G06F3/048
    • G06F2203/04801 - Cursor retrieval aid, i.e. visual aspect modification, blinking, colour changes, enlargement or other visual cues, for helping user to find the cursor in graphical user interfaces

Definitions

  • the present invention relates to a user interface apparatus and an input method of performing input by image processing.
  • a mouse is overwhelmingly used as a computer input device.
  • operations performable by using a mouse are, e.g., cursor movement and menu selection, so a mouse is merely a two-dimensional pointing device.
  • since information which can be handled by a mouse is two-dimensional, it is difficult to select an object with a depth, e.g., an object in a three-dimensional space.
  • also, in the formation of animation, it is difficult for an input device such as a mouse to add natural motions to characters.
  • the motion of the hand of a user is videotaped, and processes similar to those described above are performed by analyzing the video image.
  • a light-receiving device for detecting an object is fixedly installed. This limits the range within which the hand of a user or the like can be correctly detected. Accordingly, depending on the position of the hand of a user or the like, the shape or the motion of the hand or the like cannot be accurately detected. The result is the inability to realize control or the like desired by the user. Additionally, it is difficult for the user to immediately recognize the above-mentioned detectable range in a three-dimensional space. Therefore, the user must learn operations in the detectable range from experience. This also results in an additional operation load on the user.
  • a user interface apparatus comprises: means for cutting out an image to be processed from an input image and performing image processing; and means for switching a mode for performing pointing and other modes on the basis of a result of the image processing of the input image.
  • a user interface apparatus comprises: means for cutting out an image to be processed from an input image and performing image processing; and means for switching at least a cursor move mode, a select mode, and a double click mode on the basis of a result of the image processing of the input image.
  • the apparatus further comprises means for designating a recognition method (recognition engine) of limiting image processing contents for each object selectable in the select mode, wherein the image processing of the input image is performed for a selected object in accordance with a recognition method designated for the object.
  • the apparatus further comprises means for designating a recognition method (recognition engine) of limiting image processing contents for each object selectable in the select mode, and means for presenting, near a displayed object indicated by a cursor, information indicating a recognition method designated for the object.
  • the apparatus further comprises means for presenting the result of the image processing of the input image in a predetermined shape on a cursor.
  • a user interface apparatus comprises a first device for inputting a reflected image, and a second device for performing input by image processing of an input image, wherein the second device comprises means for designating a recognition method (recognition engine) of limiting contents of image processing of an input image with respect to the first device, and the first device comprises means for performing predetermined image processing on the basis of the designated recognition method, and means for sending back the input image and a result of the image processing to the second device.
  • the first device may further comprise means for requesting the second device to transfer information necessary for image processing suited to a necessary recognition method, if the first device does not have image processing means (recognition engine) suited to the recognition method, and the second device may further comprise means for transferring the requested information to the first device.
  • each of the first and second devices may further comprise means for requesting, when information necessary for image processing suited to a predetermined recognition method in the device is activated first, the other device to deactivate identical information, and means for deactivating information necessary for image processing suited to a predetermined recognition method when requested to deactivate the information by the other device.
  • an instruction input method comprises the steps of performing image processing for an input image of an object, and switching a mode for performing pointing and other modes on the basis of a result of the image processing.
  • the present invention obviates the need for an explicit operation performed by a user to switch modes such as a cursor move mode, a select mode, and a double click mode.
  • the present invention eliminates the need for calibration done by an operation by a user because a point designated by the user is read by recognition processing and reflected on, e.g., cursor movement on the screen.
  • the input accuracy and the user operability can be expected to be improved by the use of image processing means (recognition engine) suited to a necessary recognition method.
  • the operation state can be fed back to a user by superposing a semitransparent input image on a cursor.
  • the present invention can provide a user interface apparatus which reduces the operation load on a user and is easier to use.
  • recognition processing is performed to some extent in the first device (on the device side). Therefore, it is possible to distribute the load and increase the speed of the recognition processing as a whole.
  • the function of a device having an image input function can be improved.
  • each of the above inventions can be established as a mechanically readable medium recording programs for allowing a computer to execute a corresponding procedure or means.
  • a user interface apparatus for performing input by image processing comprises means for checking whether an object of image processing is in a proper range within which the image processing is possible, and means for presenting at least one of predetermined visual information and audio information, if it is determined that the object is outside the proper range.
  • a user interface apparatus for performing input by image processing comprises means for checking whether an object of image processing is in a proper range within which the image processing is possible and, if the object is outside the proper range, checking a direction in which the object falls outside the proper range, and means for informing a user of a direction in which the object deviates from the proper range by changing a display state of a cursor displayed on a display screen into a predetermined state, if it is determined that the object is outside the proper range.
  • the display state of a cursor displayed on the display screen is changed into a predetermined state to inform the user of a direction in which the object deviates from the proper range. Therefore, the user can visually recognize the direction in which the object deviates from the proper range and can also easily and immediately correct the position of the object. Consequently, the user can easily recognize the proper range in a three-dimensional space and input a desired instruction or the like by performing gesture in the proper range.
  • the cursor is made smaller and/or lighter in color if it is determined that the object is farther than the proper range.
  • a left side of the cursor is deformed if it is determined that the object falls outside the proper range to the left.
  • a right side of the cursor is deformed if it is determined that the object falls outside the proper range to the right.
  • the apparatus further comprises means for informing a user of a direction in which the object deviates from the proper range by using sound, if it is determined that the object is outside the proper range.
  • a user interface apparatus for performing input by image processing comprises means for checking whether an object of image processing is in a proper range within which the image processing is possible and, if the object is outside the proper range, checking a direction in which the object deviates from the proper range, and means for informing a user of a direction in which the object deviates from the proper range by sound, if it is determined that the object is outside the proper range.
  • each of the above inventions can be established as a mechanically readable medium recording programs for allowing a computer to execute a corresponding procedure or means.
  • FIG. 2 is a block diagram showing an example of the arrangement of an image input unit
  • FIG. 3 is a view for explaining the relationship between a display device, the housing of the image input unit, and an object;
  • FIG. 5 is a view showing an example of an input image which indicates gesture for cursor control
  • FIGS. 6A and 6B are views showing examples of screen displays
  • FIGS. 7A and 7B are views showing examples of screen displays
  • FIGS. 8A and 8B are views showing examples of screen displays
  • FIG. 9 is a view showing an example of an input image which indicates gesture for selection
  • FIGS. 10A and 10B are views showing examples of screen displays
  • FIGS. 11A and 11B are views showing examples of screen displays
  • FIG. 12 is a view showing an example of an input image which indicates gesture for double click
  • FIGS. 13A and 13B are views showing examples of screen displays
  • FIG. 14 is a view showing an example of a screen display
  • FIGS. 15A and 15B are views showing examples of screen displays
  • FIGS. 16A and 16B are views showing examples of screen displays
  • FIGS. 17A and 17B are views showing examples of screen displays
  • FIGS. 18A and 18B are views showing examples of screen displays
  • FIG. 19 is a view for explaining processing corresponding to a designated recognition engine
  • FIG. 20 is a view showing examples of descriptions of recognition engines for different objects
  • FIG. 21 is a block diagram showing an example of the arrangement of an interface apparatus according to the second embodiment of the present invention.
  • FIG. 22 is a view showing an example of a description in an active list storage unit when a vertical slider bar is selected
  • FIG. 23 is a block diagram showing another example of the arrangement of the interface apparatus according to the second embodiment of the present invention.
  • FIG. 24 is a flow chart showing an example of the operation procedure of the user interface apparatus of the second embodiment
  • FIG. 25 is a block diagram showing still another example of the arrangement of the interface apparatus according to the second embodiment of the present invention.
  • FIG. 26 is a flow chart showing an example of the operation procedure of the user interface apparatus of the second embodiment
  • FIG. 27 is a flow chart showing an example of the operation procedure of the user interface apparatus of the second embodiment
  • FIG. 28 is a block diagram showing an example of the arrangement of a user interface apparatus according to still another embodiment of the present invention.
  • FIG. 29 is a flow chart showing an example of the process procedure of this embodiment.
  • FIGS. 30A and 30B are views showing examples of a dot matrix and a screen display, respectively, when an object is in a proper range;
  • FIG. 31 is a flow chart showing an example of the evaluation procedure of checking whether an object is in a proper range
  • FIG. 32 is a view for explaining a length l of a vertical line in the image shape of an object
  • FIGS. 34A and 34B are views showing examples of a dot matrix and a screen display, respectively, when an object is too close;
  • FIGS. 35A and 35B are views showing examples of a dot matrix and a screen display, respectively, when an object is too far;
  • FIGS. 36A and 36B are views showing examples of a dot matrix and a screen display, respectively, when an object protrudes to the left;
  • FIGS. 37A and 37B are views showing examples of a dot matrix and a screen display, respectively, when an object protrudes to the right;
  • FIG. 38 is a flow chart showing another example of the procedure of reflecting the evaluation result.
  • FIG. 1 is a block diagram showing the arrangement of a user interface apparatus according to the first embodiment of the present invention.
  • This user interface apparatus is suitably applicable to, e.g., a computer having a graphical user interface. That is, this apparatus is a system in which a cursor, a slider bar, a scroll bar, a pull-down menu, a box, a link, and icons of applications are displayed on the display screen, and a user inputs an instruction for moving a cursor, selecting an icon, or starting an application by using an input device.
  • the input device receives inputs by performing image processing for an object such as the hand of a user without requiring any dedicated device such as a mouse.
  • This user interface apparatus includes an image input unit 10 , an image storage unit 11 , a shape interpreting unit 12 , an interpretation rule storage unit 13 , a presenting unit 14 , and a cursor switching unit 15 .
  • FIG. 2 shows an example of the arrangement of the image input unit 10 .
  • the image input unit 10 includes a light-emitting unit 101 , a reflected light extracting unit 102 , and a timing controller 103 .
  • the light-emitting unit 101 irradiates light such as near infrared rays onto an object by using light-emitting elements such as LEDs.
  • the reflected light extracting unit 102 receives the reflected light from the object by using light-receiving elements arranged in the form of a two-dimensional array.
  • the timing controller 103 controls the operation timings of the light-emitting unit 101 and the reflected light extracting unit 102 .
  • the difference between the amount of light received by the reflected light extracting unit 102 when the light-emitting unit 101 emits light and the amount of light received by the reflected light extracting unit 102 when the light-emitting unit 101 does not emit light is calculated to correct the background, thereby extracting only a component of the light emitted from the light-emitting unit 101 and reflected by the object.
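
A minimal sketch of this background correction, assuming 8-bit frames stored as numpy arrays, one captured while the light-emitting unit 101 emits light and one captured while it does not:

```python
import numpy as np

def extract_reflected_component(lit_frame: np.ndarray, unlit_frame: np.ndarray) -> np.ndarray:
    """Subtract the frame captured while the light-emitting unit is off from the
    frame captured while it is on; what remains is (approximately) only the light
    emitted by the unit and reflected by the nearby object."""
    diff = lit_frame.astype(np.int16) - unlit_frame.astype(np.int16)
    return np.clip(diff, 0, 255).astype(np.uint8)  # negative values are background fluctuations
```
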
  • the image input unit 10 need not have any light-emitting unit, i.e., can have only a light-receiving unit such as a CCD camera.
  • FIG. 3 shows the relationship between a display device 20 , a housing 8 of the image input unit 10 , and an object 22 .
  • an image of the reflected light from the hand is obtained.
  • Each pixel value of the reflected light image is affected by the property of the object (e.g., whether the object mirror-reflects, scatters, or absorbs light), the direction of the object surface, the distance to the object, and the like factor. However, if a whole object uniformly scatters light, the amount of the reflected light has a close relationship to the distance to the object.
  • the reflected light image when a user puts his or her hand before the image input unit reflects the distance to the hand, the inclination of the hand (the distance changes from one portion to another), and the like. Therefore, various pieces of information can be input and generated by extracting these pieces of information.
  • the image storage unit 11 sequentially stores two-dimensional images of an object of image detection, which are output at predetermined time intervals (e.g., 1/30, 1/60, or 1/100 sec) from the image input unit 10.
  • the shape interpreting unit 12 extracts a predetermined feature amount from a dot matrix and interprets the shape on the basis of the interpretation rules stored in the interpretation rule storage unit 13 .
  • the shape interpreting unit 12 outputs an instruction corresponding to a suitable interpretation rule as an interpretation result. If there is no suitable interpretation rule, it is also possible, where necessary, to change the way a predetermined feature amount is extracted from a dot matrix (e.g., change a threshold value when dot matrix threshold processing is performed) and again perform the matching processing. If no suitable interpretation rule is finally found, it is determined that there is no input.
  • the interpretation rule storage unit 13 stores interpretation rules for shape interpretation. For example, predetermined contents such as feature amounts, e.g., the shape, area, uppermost point, and the center of gravity of an object such as the hand of a user in a dot matrix and designation contents corresponding to these predetermined contents are stored as interpretation rules.
  • the designation contents include, e.g., selection of an icon, start of an application, and movement of a cursor. When cursor movement is to be performed, the moving amount of a cursor corresponding to the direction and the distance of the movement of the hand is also designated. For example, the following rules are possible.
  • a state in which the thumb and the index finger are open and raised is used to indicate cursor movement (in this case, the distance and the direction of the movement of the tip of the index finger correspond to the distance and the direction of the movement of the cursor).
  • a state in which the thumb and the index finger are closed and raised is used to indicate selection of an icon in a position where the cursor exists.
  • a state in which the thumb and the index finger are raised and the palm of the hand is turned from that in the cursor movement is used to indicate start of an application corresponding to an icon in a position where the cursor exists.
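
A minimal sketch of how such interpretation rules might be matched against feature amounts extracted from a dot matrix, including the threshold retry mentioned above; the rule representation, threshold values, and function names are assumptions rather than the patent's own implementation:

```python
import numpy as np

def feature_amounts(dot_matrix: np.ndarray, threshold: int):
    """Feature amounts of the kind stored with the interpretation rules:
    area, uppermost point, and center of gravity of the object region."""
    ys, xs = np.nonzero(dot_matrix > threshold)
    if len(xs) == 0:
        return None
    top = int(np.argmin(ys))
    return {
        "area": int(len(xs)),
        "uppermost_point": (int(xs[top]), int(ys[top])),
        "center_of_gravity": (float(xs.mean()), float(ys.mean())),
    }

def interpret(dot_matrix: np.ndarray, rules, thresholds=(128, 96, 160)):
    """Match the feature amounts against the interpretation rules.  Each rule is a
    (predicate, designation) pair such as (is_move_gesture, "move cursor").  If no
    rule matches, the extraction is retried with other threshold values; if nothing
    matches at all, None is returned, i.e. "no input"."""
    for t in thresholds:
        feats = feature_amounts(dot_matrix, t)
        if feats is None:
            continue
        for predicate, designation in rules:
            if predicate(feats):
                return designation
    return None
```
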
  • Select recognition engine (select engine):
  • direction: forward direction, i.e., a direction in which the thumb of the right hand is positioned on the leftmost side
  • thumb: stretched, i.e., all joints of the finger are stretched
  • index finger: stretched, i.e., all joints of the finger are stretched
  • middle finger: bent, i.e., all joints of the finger are bent
  • rotational angle from the immediately preceding select engine: 180°, i.e., a condition is that the rotational angle from the select engine is 180°
  • direction: reverse direction, i.e., a direction in which the thumb of the right hand is positioned on the rightmost side
  • thumb: stretched, i.e., all joints of the finger are stretched
  • index finger: stretched, i.e., all joints of the finger are stretched
  • middle finger: bent, i.e., all joints of the finger are bent
  • ring finger: bent, i.e., all joints of the finger are bent
  • Representative examples of the extraction of a feature amount from a dot matrix in the shape interpretation by the shape interpreting unit 12 are distance information extraction and region extraction. If an object has a uniform homogeneous scattering surface, the reflected light image can be regarded as a distance image. Accordingly, the three-dimensional shape of the object can be extracted from the light-receiving unit. If the object is a hand, an inclination of the palm of the hand, for example, can be detected. The inclination of the palm of the hand appears as the difference between partial distances. If pixel values change when the hand is moved, it can be considered that the distance changes. Also, almost no light is reflected from a far object such as a background.
  • the shape of an object can be easily cut out. If the object is a hand, for example, it is very easy to cut out the silhouette image of the hand. Even when a distance image is used, a general approach is to once perform region extraction by using a threshold value and then use distance information in the region.
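
A sketch of this region extraction followed by use of the distance information inside the region; the pixel threshold and the returned quantities are illustrative assumptions:

```python
import numpy as np

def cut_out_object(reflected: np.ndarray, threshold: int = 32) -> np.ndarray:
    """Region extraction: pixels brighter than the threshold belong to the object,
    since almost no light comes back from a far background."""
    return reflected > threshold

def distance_information(reflected: np.ndarray, region: np.ndarray) -> dict:
    """Treat pixel values inside the region as (inverse) distance information:
    the brightest pixel is the nearest surface, and differences between partial
    values reflect, e.g., the inclination of the palm."""
    values = reflected[region].astype(float)
    if values.size == 0:
        return {}
    return {
        "nearest_value": float(values.max()),           # brightest pixel ~ nearest point
        "mean_value": float(values.mean()),             # rough overall distance
        "spread": float(values.max() - values.min()),   # coarse measure of depth variation
    }
```
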
  • Various methods are usable as a method of matching a feature amount extracted from a dot matrix with the interpretation rules. Examples are vector formation by which a vector is extracted from an image, extraction of a shape deformed state based on a shape model, and spectral analysis based on a distance value on a scan line.
  • the matching processing can be reexecuted by changing the threshold value or the like. If no suitable shape is finally found, it is determined that there is no input.
  • if the shape interpreting unit 12 determines that an instruction is for starting a function of an application or the OS, the corresponding software is started.
  • the presenting unit 14 performs presentation reflecting the interpretation result from the shape interpreting unit 12 on the display device. For example, the movement of a cursor and, where necessary, messages are presented.
  • the cursor switching unit 15 controls cursor switching on the basis of the interpretation result from the shape interpreting unit 12 .
  • FIGS. 4A through 4H show an example of the operation procedure of the user interface apparatus of this embodiment.
  • a cursor control state C is initialized (C ← 0),
  • a selected state S is initialized (S ← 0),
  • cursor information I is initialized (I ← 0), and
  • a recognition engine flag R is initialized (R ← 0) (step S1).
  • a reflected image is written in the image storage unit 11 (step S2).
  • a dot matrix is loaded to the shape interpreting unit 12 (step S3).
  • the shape interpreting unit 12 checks a mode indicated by the gesture from a feature amount extracted from the dot matrix and the interpretation rules (step S4).
  • the gesture indicating cursor control is detected when, for example, the shape of a hand shown in FIG. 5 is recognized. That is, the gesture is detected when the thumb and the index finger of the right hand are open and raised upward.
  • the gesture indicating selection is detected when, for example, the shape of a hand shown in FIG. 9 is recognized. That is, the gesture is detected when the thumb and the index finger of the right hand are closed and raised upward.
  • a recognition engine is for extracting a predetermined feature amount from a dot matrix as will be described in detail later, and various recognition engines are usable.
  • One example is an uppermost point vertical direction engine which extracts the vertical moving amount of the uppermost point of an object shape in a dot matrix.
  • the gesture indicating double click is detected when, e.g., the shape of a hand shown in FIG. 12 is recognized. That is, the gesture is detected when the thumb and the index finger of the left hand are open and raised upward.
  • if none of these gestures is detected, another recognition processing is performed (step S30), and the flow returns to step S2.
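
The overall flow of steps S1 through S4 and S30 can be sketched as follows; the classifier and handler interfaces are assumptions:

```python
def interface_loop(image_input, image_storage, interpret_gesture, handlers):
    """Sketch of the loop in FIGS. 4A-4H.  `interpret_gesture` returns one of
    "cursor control", "select", "double click" or None; `handlers` maps those
    modes (plus "other") to processing routines."""
    C = S = I = R = 0                                   # step S1: initialize C, S, I and R
    state = {"C": C, "S": S, "I": I, "R": R}
    while True:
        image_storage.write(image_input.capture())      # step S2: write reflected image
        dot_matrix = image_storage.load()               # step S3: load dot matrix
        mode = interpret_gesture(dot_matrix)            # step S4: check mode indicated by gesture
        handlers.get(mode, handlers["other"])(dot_matrix, state)   # step S30 when no mode matches
        # the flow then returns to step S2
```
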
  • cursor movement is designated when the shape of a hand in which the thumb and the index finger are stretched as shown in FIG. 5 is recognized.
  • when the user moves his or her hand, a cursor 201 moves on the display screen accordingly. In this case, the amount and direction of the movement of a fixed point in the dot matrix of the user's hand shape, e.g., an uppermost point (e.g., the tip of the index finger), which is in the uppermost position in the vertical direction in the image, or a nearest point (e.g., a point with the highest gradation level), which is nearest to the light-receiving unit in the image, are extracted.
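
A sketch of extracting the uppermost point and the nearest point used for cursor movement, assuming the dot matrix is a numpy array in which brighter pixels are nearer:

```python
import numpy as np

def uppermost_point(dot_matrix: np.ndarray, threshold: int = 32):
    """Uppermost object pixel, e.g. the tip of the index finger."""
    ys, xs = np.nonzero(dot_matrix > threshold)
    if len(ys) == 0:
        return None
    i = int(np.argmin(ys))            # smallest row index = highest point in the image
    return int(xs[i]), int(ys[i])

def nearest_point(dot_matrix: np.ndarray):
    """Pixel with the highest gradation level, i.e. the point nearest to the light-receiving unit."""
    y, x = np.unravel_index(int(np.argmax(dot_matrix)), dot_matrix.shape)
    return int(x), int(y)

def cursor_movement(previous, current, gain: float = 1.0):
    """Map the movement of the tracked point between two frames onto the cursor."""
    return gain * (current[0] - previous[0]), gain * (current[1] - previous[1])
```
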
  • the shape of the cursor is so changed as to present the function of the object.
  • An arrow 204 in FIG. 7A indicates that an operation of the slider bar as an object to be operated is limited in the vertical direction. That is, a vertical direction engine is designated for the slider bar. If this is the case, only vertical movement is recognized regardless of how the user moves his or her hand.
  • for example, if the shape of the hand in FIG. 9 is recognized in a state as shown in FIG. 10A, a slider bar is selected.
  • An arrow 208 in FIG. 10A indicates that an operation of the slider bar as an object to be operated is limited in the vertical direction.
  • a semitransparent input image 209 can also be displayed on the cursor as shown in FIG. 10B.
  • in FIG. 13A, for example, the cursor is moved onto an icon “INTERNET” with the shape of the hand shown in FIG. 5, and subsequently the hand is turned almost 180° as shown in FIG. 12. Consequently, double click of “INTERNET” is accepted.
  • FIG. 13B shows a state immediately before the selected icon is open. Note that a semitransparent input image 212 can also be displayed on the cursor as shown in FIG. 14.
  • the cursor is moved to “FILE” as shown in FIG. 15B.
  • a semitransparent input image 213 can also be displayed on the cursor as shown in FIG. 15B.
  • a pull-down menu is displayed as shown in FIG. 16A.
  • An arrow 214 indicates that an operation of the pull-down menu as an object to be operated is limited in the vertical direction.
  • FIG. 19 is a flow chart of processing concerning a recognition engine.
  • a recognition engine is designated for each object where necessary.
  • this designation can be performed by programming by a programmer who forms an application program by using the present invention.
  • a recognition engine extracts a predetermined feature amount from a dot matrix. That is, if there is a recognition engine designated by a selected object (step S 31 ), the shape interpreting unit 12 extracts a feature amount from a dot matrix in accordance with the designated recognition engine. If there is no designated recognition engine (step S 31 ), the shape interpreting unit 12 executes normal recognition (step S 32 ).
  • Various recognition engines are usable. Examples are a nearest point vertical direction engine 121 for extracting a vertical moving amount of a nearest point of an object shape in a dot matrix, a nearest point horizontal direction engine 122 for extracting a horizontal moving amount of the nearest point, a nearest point oblique direction engine 123 for extracting an oblique moving amount of the nearest point, a barycentric point vertical direction engine 124 for extracting a vertical moving amount of a barycentric point, a barycentric point horizontal direction engine 125 for extracting a horizontal moving amount of the barycentric point, a barycentric point oblique direction engine 126 for extracting an oblique moving amount of the barycentric point, an uppermost point vertical direction engine 127 for extracting a vertical moving amount of an uppermost point, an uppermost point horizontal direction engine 128 for extracting a horizontal moving amount of the uppermost point, an uppermost point oblique direction engine 129 for extracting an oblique moving amount of the uppermost point, and the like.
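
Since these engines differ mainly in which reference point they track (nearest, barycentric, or uppermost point) and along which axis, they can be sketched as a small dispatch table. The registry below reuses the uppermost_point and nearest_point helpers sketched earlier; the names and the oblique computation are illustrative assumptions:

```python
import numpy as np

def barycentric_point(dot_matrix: np.ndarray, threshold: int = 32):
    """Center of gravity of the object region."""
    ys, xs = np.nonzero(dot_matrix > threshold)
    if len(xs) == 0:
        return None
    return float(xs.mean()), float(ys.mean())

def make_engine(point_fn, axis: str):
    """Build an engine that reports how far the chosen reference point moved
    along the chosen axis between two successive dot matrices."""
    def engine(prev_matrix, cur_matrix):
        p, q = point_fn(prev_matrix), point_fn(cur_matrix)
        if p is None or q is None:
            return 0.0
        dx, dy = q[0] - p[0], q[1] - p[1]
        # "oblique" is a crude stand-in for the oblique moving amount
        return {"vertical": dy, "horizontal": dx, "oblique": (dx + dy) / 2.0}[axis]
    return engine

# Partial registry mirroring the list above (engines 121, 122, 124, 127, ...).
RECOGNITION_ENGINES = {
    "nearest point vertical direction": make_engine(nearest_point, "vertical"),
    "nearest point horizontal direction": make_engine(nearest_point, "horizontal"),
    "barycentric point vertical direction": make_engine(barycentric_point, "vertical"),
    "uppermost point vertical direction": make_engine(uppermost_point, "vertical"),
}
```
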
  • FIG. 20 shows examples of descriptions of recognition engines for different objects.
  • This embodiment as described above obviates the need for an explicit operation performed by a user to switch modes such as a cursor move mode, a select mode, and a double click mode.
  • the embodiment eliminates the need for calibration done by an operation by a user because a point designated by the user is read by recognition processing and reflected on, e.g., cursor movement on the screen.
  • the input accuracy and the user operability can be expected to be improved by the use of a recognition engine.
  • the operation state can be fed back to a user by superposing a semitransparent input image on a cursor.
  • this embodiment can provide a user interface apparatus which reduces the operation load on a user and is easier to use.
  • the second embodiment is basically the same as the first embodiment except that a part of the recognition processing is performed inside the image input unit (to be referred to as the device side hereinafter) and the image input unit 10 transfers a dot matrix of an input image and a predetermined recognition result to the main body.
  • the recognition processing performed on the device side is desirably light-load processing.
  • FIG. 21 shows an example of the arrangement of an interface apparatus according to this embodiment.
  • the main body incorporates a main body controller 32 , a presenting unit 14 , and a cursor switching unit 15
  • the device side incorporates the image input unit 10 , an image storage unit 11 , a recognition engine controller 30 , an active list 31 , and several predetermined recognition engines 121 , 122 , 142 , 143 , and 144 .
  • the main body controller 32 corresponds to the shape interpreting unit 12 and the interpretation rule storage unit 13 (including the recognition engines) shown in FIG. 1. However, the main body controller 32 can have another arrangement, perform another recognition processing, or use recognition engines.
  • FIG. 22 shows an example of a description in the active list storage unit when a vertical slider bar is selected.
  • FIG. 22 shows that the cursor engine 142 and the nearest point vertical direction engine 121 are designated.
  • the main body side sends a list of recognition engines to be activated or a list of recognition engines to be deactivated to the device side.
  • this list is stored in the active list storage unit 31 .
  • the recognition engine controller 30 extracts a predetermined feature amount as a recognition result from an input image in accordance with a designated recognition engine and sends back the input image and the recognition result to the main body side.
  • the device side performs recognition processing to a certain degree. Consequently, it is possible to distribute the load and improve the speed of the recognition processing as a whole.
  • the function of a device having an image input function can be improved.
  • FIG. 23 shows another configuration of this embodiment in which a recognition engine storage unit 33 is added to the configuration in FIG. 21.
  • FIG. 24 shows the flow of processing in FIG. 23. This flow of processing will be described below with reference to FIG. 24.
  • the main body sends an active list (or an inactive list) which is a list of recognition engines to be activated (or deactivated) on the device side. All recognition engines contained in the active list (or the inactive list) exist on the device side.
  • if recognition engines used on the device side are few, all of these recognition engines can be mounted on the device side. However, if the number of types of recognition engines is increased, not all of the mounted recognition engines are actually used. This results in decreased economical efficiency.
  • in step S33 of FIG. 24, the main body sends the active list to the device side.
  • the recognition engine controller 30 on the device side checks whether all recognition engines described in the active list exist on the device side (step S 34 ). If all recognition engines exist, the recognition engine controller 30 executes recognition by using these recognition engines (step S 38 ).
  • if some recognition engine described in the active list does not exist on the device side (step S34), the controller 30 sends a transfer request for the corresponding recognition engine to the main body (step S35).
  • the main body controller 32 reads out the corresponding recognition engine from the recognition engine storage unit 33 and transfers the readout engine to the device side.
  • the recognition engine controller 30 on the device side receives the recognition engine (step S 36 ).
  • the received recognition engine is written in the recognition engine controller 30 .
  • the recognition engine controller 30 rewrites the active list stored in the active list storage unit 31 with the active list transferred from the main body (step S 37 ). Thereafter, the recognition engine controller 30 executes recognition (step S 38 ).
  • a recognition engine to be transferred is a copy of a recognition engine stored in the recognition engine storage unit 33 .
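
A sketch of this device-side handling of the active list (steps S33 through S38); class, method, and message names are assumptions:

```python
class DeviceSideController:
    """Sketch of the device-side handling of an active list (steps S33-S38 in FIG. 24).
    All names and interfaces here are illustrative assumptions."""

    def __init__(self, engines, active_list_storage, main_body):
        self.engines = dict(engines)          # recognition engine name -> callable
        self.storage = active_list_storage    # active list storage unit 31
        self.main_body = main_body            # proxy that can transfer engine copies

    def receive_active_list(self, active_list):
        # Step S34: do all listed engines exist on the device side?
        missing = [name for name in active_list if name not in self.engines]
        for name in missing:
            # Steps S35-S36: request and receive a copy of the missing engine.
            self.engines[name] = self.main_body.transfer_engine(name)
        # Step S37: rewrite the stored active list with the transferred one.
        self.storage.write(list(active_list))

    def execute_recognition(self, prev_matrix, cur_matrix):
        # Step S38: run only the active engines and send back the input image
        # together with the recognition results.
        results = {name: self.engines[name](prev_matrix, cur_matrix)
                   for name in self.storage.read()}
        return cur_matrix, results
```
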
  • FIG. 25 shows still another configuration of the interface apparatus according to this embodiment.
  • the main body shown in FIG. 25 is substantially the same as that shown in FIG. 23 except that, if recognition engines exist in a plurality of locations such as the main body side and the device side, the side having a recognition engine which is activated first requests the other side having an identical recognition engine to deactivate that recognition engine.
  • FIG. 26 shows an example of the operation procedure of this active request.
  • FIG. 27 shows an example of the operation procedure of active request reception.
  • image input is performed on the device side (step S39), and recognition is executed on the main body side and/or the device side (step S40).
  • the side which executes recognition transfers the recognition result, an image matrix, and an active list (or an inactive list) to the other side (step S 41 ).
  • the receiving side receives the recognition result, the image matrix, and the active list (or the inactive list) (step S 42 ), and rewrites ON recognition engines in the received active list (or OFF recognition engines in the received inactive list) with OFF recognition engines in the stored active list (step S 43 ).
  • the receiving side then executes another processing where necessary (step S 44 ).
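
A short sketch of steps S42 and S43, in which engines switched ON in the received active list are switched OFF locally so that the same engine does not run on both sides; the list representation is an assumption:

```python
def apply_received_active_list(local_active, received_active):
    """Steps S42-S43 sketched: a recognition engine switched ON in the received
    active list is switched OFF locally, so that the same engine never runs on
    both the main body side and the device side at the same time."""
    return {name: False if received_active.get(name) else state
            for name, state in local_active.items()}

# Example: the device side activated the nearest point vertical direction engine
# first, so the main body deactivates its own copy.
local = {"nearest point vertical direction": True, "cursor": True}
received = {"nearest point vertical direction": True}
print(apply_received_active_list(local, received))
# -> {'nearest point vertical direction': False, 'cursor': True}
```
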
  • each of the above functions can also be realized by software. Furthermore, these functions can be practiced as mechanically readable media recording programs for allowing a computer to execute the procedures or the means described above.
  • FIGS. 28 through 38 show still another embodiment of the present invention. This embodiment checks whether an object of image processing is in a proper range within which the image processing is possible.
  • FIG. 28 is a block diagram showing the arrangement of a user interface apparatus according to this embodiment of the present invention.
  • FIG. 29 shows an example of the operation procedure of the user interface apparatus of this embodiment.
  • This user interface apparatus is suitably applicable to, e.g., a computer having a graphical user interface. That is, this apparatus is a system in which a cursor, a slider bar, a scroll bar, a pull-down menu, a box, a link, and icons of applications are displayed on the display screen, and a user inputs an instruction for moving a cursor, selecting an icon, or starting an application by using an input device.
  • the input device receives inputs by performing image processing for an object such as the hand of a user without requiring any dedicated device such as a mouse.
  • This user interface apparatus is roughly divided into an input function section and a feedback function section.
  • the input function section which can be a well-known mechanism emits light, receives reflected light from an object such as the hand of a user as an image (or receives background light reflected by an object as an image), detects information of the shape, motion, or distance of the object, and performs predetermined control (e.g., control relating to I/O devices or start of application software) in accordance with the shape or the like. That is, this input function section provides a function by which a user can perform an intended input operation by, e.g., moving his or her hand.
  • the input function section includes an image storage unit 11 , a shape interpreting unit 12 , an interpretation rule storage unit 13 , and a presenting unit 14 .
  • the feedback function section checks whether an object of image detection such as the hand of a user exists in a proper range and presents the evaluation result to the user.
  • the feedback function section includes the image storage unit 11 , a proper range evaluating unit 15 , and an evaluation result reflecting unit 16 .
  • the image storage unit 11 which is a common unit of the two functions and an image input device (not shown) will be described first.
  • the image storage unit 11 sequentially stores two-dimensional images of an object of image detection, which are output at predetermined time intervals (e.g., 1/30, 1/60, or 1/100 sec) from the image input device (not shown).
  • the image input device (not shown) includes a light-emitting unit and a light-receiving unit.
  • the light-emitting unit irradiates light such as near infrared rays onto an object by using light-emitting elements such as LEDs.
  • the light-receiving unit receives the reflected light from the object by using light-receiving elements arranged in the form of a two-dimensional array.
  • the difference between the amount of light received when the light-emitting unit emits light and the amount of light received when the light-emitting unit does not emit light is calculated to correct the background, thereby extracting only a component of the light emitted from the light-emitting unit and reflected by the object.
  • the image input device need not have any light-emitting unit, i.e., can have only a light-receiving unit such as a CCD camera.
  • each pixel value of the reflected light image is affected by the property of the object (e.g., whether the object mirror-reflects, scatters, or absorbs light), the direction of the object surface, the distance to the object, and the like factor. However, if a whole object uniformly scatters light, the amount of the reflected light has a close relationship to the distance to the object. Since a hand has this property, the reflected light image when a user moves his or her hand in front of the image input device reflects the distance to the hand, the inclination of the hand (the distance changes from one portion to another), and the like. Therefore, various pieces of information can be input and generated by extracting these pieces of information.
  • FIG. 30A shows an example of a dot matrix when a hand is an object.
  • the shape interpreting unit 12 extracts a predetermined feature amount from a dot matrix and interprets the shape on the basis of interpretation rules stored in the interpretation rule storage unit 13 (step S 103 ).
  • the shape interpreting unit 12 outputs an instruction corresponding to a suitable interpretation rule as an interpretation result (steps S 104 and S 105 ). If there is no suitable interpretation rule, it is also possible, where necessary, to change the way a predetermined feature amount is extracted from a dot matrix (e.g., change a threshold value when dot matrix threshold processing is performed) and again perform the matching processing. If no suitable interpretation rule is finally found (step S 104 ), it is determined that there is no input.
  • the interpretation rule storage unit 13 stores interpretation rules for shape interpretation. For example, predetermined contents such as feature amounts, e.g., the shape, area, uppermost point, and barycenter of an object such as the hand of a user in a dot matrix and designation contents corresponding to these predetermined contents are stored as interpretation rules.
  • the designation contents include, e.g., selection of an icon, start of an application, and movement of a cursor. When cursor movement is to be performed, the moving amount of a cursor corresponding to the direction and the distance of the movement of the hand is also designated. For example, the following rules are possible.
  • a state in which only the index finger is raised is used to indicate cursor movement (in this case, the distance and direction of the movement of the tip of the index finger correspond to the distance and direction of the movement of the cursor).
  • An action of moving the thumb while only the index finger is raised is used to indicate selection of an icon in a position where the cursor exists.
  • An action of turning the palm of the hand while only the index finger is raised is used to indicate start of an application corresponding to an icon in a position where the cursor exists.
  • Representative examples of the extraction of a feature amount from a dot matrix in the shape interpretation by the shape interpreting unit 12 are distance information extraction and region extraction. If an object has a uniform homogeneous scattering surface, the reflected light image can be regarded as a distance image. Accordingly, the three-dimensional shape of the object can be extracted from the light-receiving unit. If the object is a hand, an inclination of the palm of the hand, for example, can be detected. The inclination of the palm of the hand appears as the difference between partial distances. If pixel values change when the hand is moved, it can be considered that the distance changes. Also, almost no light is reflected from a far object such as a background.
  • the shape of an object can be easily cut out. If the object is a hand, for example, it is very easy to cut out the silhouette image of the hand. Even when a distance image is used, a general approach is to once perform region extraction by using a threshold value and then use distance information in the region.
  • Various methods are usable as a method of matching a feature amount extracted from a dot matrix with the interpretation rules. Examples are vector formation by which a vector is extracted from an image, extraction of a shape deformed state based on a shape model, and spectral analysis based on a distance value on a scan line.
  • the matching processing can be reexecuted by changing the threshold value or the like. If no suitable shape is finally found, it is determined that there is no input.
  • the presenting unit 14 performs presentation reflecting the interpretation result from the shape interpreting unit 12 on the display device. For example, the presenting unit 14 moves a cursor, changes the shape of the cursor, and, where necessary, presents messages. Note that the message presentation is performed by using a sound reproducing device singly or in combination with the display device.
  • the proper range evaluating unit 15 fetches the two-dimensional image stored in the image storage unit 11 as a dot matrix as shown in FIG. 30A (step S 102 ), checks whether the object is in a proper range, and generates feedback information corresponding to the evaluation result (steps S 106 through S 116 ).
  • the evaluation result reflecting unit 16 On the basis of the feedback information, the evaluation result reflecting unit 16 outputs an instruction for performing presentation reflecting the evaluation result by using the display device and/or the sound reproducing device (step S 117 ).
  • an appropriate dot matrix as shown in FIG. 30A for example, is obtained, and a desired input operation using the hand of a user or the like is possible.
  • if the object is outside the proper range, more specifically, if the object is too close to or too far from the light-receiving unit or protrudes to the left or the right of the light-receiving unit, no desired instruction or the like by the user can be input.
  • FIG. 31 shows an example of this evaluation procedure.
  • FIGS. 30A, 34A, 35A, 36A, and 37A show dot matrix examples when an object is in a proper range, is too close, is too far, protrudes to the left, and protrudes to the right, respectively.
  • let s be the area of the image of the object, d be the distance to the closest point in the image of the object, and l be the length of a vertical line in the image shape of the object.
  • the area s of the image of the object can be represented by the number of pixels corresponding to the object in a dot matrix or the ratio of these pixels in all pixels of the dot matrix.
  • the distance d to the closest point in the image of the object can be represented by the reciprocal of the maximum value of the density of the pixels corresponding to the object in the dot matrix or by (possible highest density of pixels) - (maximum value of density).
  • the length l of the vertical line in the image shape of the object can be represented by the maximum number of vertically continuous pixels in the outer shape of the image of the object in the dot matrix.
  • a lower-limiting value α and an upper-limiting value β of the area s, a lower-limiting value γ and an upper-limiting value δ of the distance d to the closest point in the image, and an upper-limiting value ε of the length l of the vertical line in the image shape are set.
  • if α ≤ s ≤ β, γ ≤ d ≤ δ, and l ≤ ε (step S121), the range is proper (step S122).
  • if s > β and d < γ (step S123), the object is too close (step S124).
  • if s < α and d > δ (step S125), the object is too far (step S126).
  • if l > ε and the position of the vertical line is on the right (step S127), the object is protruding to the right (step S128).
  • if l > ε and the position of the vertical line is on the left (step S129), the object is protruding to the left (step S130).
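
A sketch of this evaluation, assuming an 8-bit dot matrix and a solid silhouette; the limiting values α, β, γ, δ, and ε are passed in as parameters, and the pixel threshold is an illustrative assumption:

```python
import numpy as np

def evaluate_proper_range(dot_matrix, alpha, beta, gamma, delta, epsilon, threshold=32):
    """Sketch of the checks in FIG. 31 (steps S121-S130)."""
    mask = dot_matrix > threshold
    s = int(mask.sum())                        # area s: number of object pixels
    d = 255 - int(dot_matrix.max())            # distance d: possible highest density - maximum density
    columns = mask.sum(axis=0)                 # vertically continuous pixels per column (solid silhouette assumed)
    l = int(columns.max())                     # length l of the longest vertical line
    if alpha <= s <= beta and gamma <= d <= delta and l <= epsilon:
        return "proper"                                            # steps S121-S122
    if s > beta and d < gamma:
        return "too close"                                         # steps S123-S124
    if s < alpha and d > delta:
        return "too far"                                           # steps S125-S126
    if l > epsilon:
        side = "right" if int(columns.argmax()) >= mask.shape[1] // 2 else "left"
        return "protruding to the " + side                         # steps S127-S130
    return "improper"                                              # none of the rules applies
```
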
  • FIG. 33 shows an example of this process procedure.
  • the evaluation result reflecting unit 16 deforms the shape of a cursor displayed on the display screen on the basis of the user feedback information supplied as the evaluation result from the proper range evaluating unit 15 , thereby informing a user of the evaluation result.
  • FIG. 30B shows an example of this state.
  • if the user feedback information indicates an improper state, the unit 16 displays an error message (step S151).
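
A sketch of how the evaluation result reflecting unit 16 might map the feedback information onto the cursor presentation described above; the concrete scale and opacity values are assumptions:

```python
def reflect_evaluation(feedback):
    """Sketch of the FIG. 33 procedure: choose a cursor presentation that shows the
    user in which direction the object left the proper range."""
    presentation = {"scale": 1.0, "opacity": 1.0, "deformed_side": None, "message": None}
    if feedback == "too far":
        presentation.update(scale=0.6, opacity=0.5)        # cursor smaller and lighter in color
    elif feedback == "too close":
        presentation.update(scale=1.4)                     # cursor enlarged
    elif feedback == "protruding to the left":
        presentation.update(deformed_side="left")          # left side of the cursor deformed
    elif feedback == "protruding to the right":
        presentation.update(deformed_side="right")         # right side of the cursor deformed
    elif feedback != "proper":
        presentation["message"] = "object is outside the proper range"   # error message (step S151)
    return presentation
```
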
  • FIG. 38 shows another example of this process procedure.
  • the evaluation result reflecting unit 16 informs a user of the evaluation result by sound by using the sound reproducing device on the basis of the user feedback information supplied as the evaluation result from the proper range evaluating unit 15 .
  • if the user feedback information is “right” (step S169), the unit 16 causes the sound reproducing device to output a voice message such as “protruding to the right” (step S170).
  • if the user feedback information indicates any other improper state, the unit 16 presents a voice error message.
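
The corresponding audio feedback of FIG. 38 can be sketched in the same way; the speech output is replaced here by a simple print placeholder:

```python
VOICE_MESSAGES = {
    "too close": "too close",
    "too far": "too far",
    "protruding to the left": "protruding to the left",
    "protruding to the right": "protruding to the right",    # step S170
}

def reflect_evaluation_by_sound(feedback, speak=print):
    """Sketch of the FIG. 38 procedure.  `speak` stands in for the sound
    reproducing device; here it simply prints the message."""
    if feedback == "proper":
        return
    speak(VOICE_MESSAGES.get(feedback, "object is outside the proper range"))  # voice error message
```
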
  • the user can also be informed by both images and sounds by using both the processing in FIG. 33 and the processing in FIG. 38.
  • each of the above functions can be realized by software. Furthermore, these functions can be practiced as mechanically readable media recording programs for allowing a computer to execute the procedures or the means described above.

Abstract

A user interface apparatus for performing input by image processing of an input image includes a unit for switching a mode for performing pointing and other modes on the basis of a result of the image processing of the input image. A user interface apparatus for performing input by image processing of an input image includes a unit for switching at least a cursor move mode, a select mode, and a double click mode on the basis of a result of the image processing of the input image. A user interface apparatus for performing input by image processing includes a unit for checking whether an object of image processing is in a proper range within which the image processing is possible, and a unit for presenting at least one of predetermined visual information and audio information, if it is determined that the object is outside the proper range. For example, the cursor is made smaller and/or lighter in color or made larger if the object is farther or closer, respectively, than the proper range, and the left or the right side of the cursor is deformed if the object falls outside the proper range to the left or the right.

Description

    BACKGROUND OF THE INVENTION
  • This application is based on Japanese Patent Applications No. 9-9496 filed on Jan. 22, 1997 and No. 9-9773 filed on Jan. 22, 1997, the contents of which are cited herein by reference. [0001]
  • The present invention relates to a user interface apparatus and an input method of performing input by image processing. [0002]
  • A mouse is overwhelmingly used as a computer input device. However, operations performable by using a mouse are, e.g., cursor movement and menu selection, so a mouse is merely a two-dimensional pointing device. Since information which can be handled by a mouse is two-dimensional information, it is difficult to select an object with a depth, e.g., an object in a three-dimensional space. Also, in the formation of animation, it is difficult for an input device such as a mouse to add natural motions to characters. [0003]
  • To compensate for the difficulties of pointing in a three-dimensional space, several apparatuses have been developed. Examples are an apparatus for inputting information in six-axis directions by pushing and rolling a ball in a desired direction and apparatuses called a data glove, a data suit, and a cyber glove which are fitted on a hand or the like. Unfortunately, these apparatuses are presently less popular than they were initially expected because of their poor operability. [0004]
  • On the other hand, a direct indicating type input apparatus has been recently developed by which a user can input intended information by gesture without handling any special equipment. [0005]
  • For example, light is irradiated, reflected light from the hand of a user is received, and an image of the received light is formed to perform fine extraction or shape recognition processing, thereby executing control in accordance with the shape of the hand, moving a cursor in accordance with the moving amount of the hand, or changing the visual point in a three-dimensional model. [0006]
  • Alternatively, the motion of the hand of a user is videotaped, and processes similar to those described above are performed by analyzing the video image. [0007]
  • By the use of these apparatuses, a user can easily perform input by gesture without attaching any special equipment. [0008]
  • In these apparatuses, however, various modes such as a cursor move mode, a select mode, and a double click mode are fixedly used. To change the mode, therefore, a user must perform an explicit operation of changing the mode, resulting in an additional operation load on the user. [0009]
  • Also, in these apparatuses, a light-receiving device for detecting an object is fixedly installed. This limits the range within which the hand of a user or the like can be correctly detected. Accordingly, depending on the position of the hand of a user or the like, the shape or the motion of the hand or the like cannot be accurately detected. The result is the inability to realize control or the like desired by the user. Additionally, it is difficult for the user to immediately recognize the above-mentioned detectable range in a three-dimensional space. Therefore, the user must learn operations in the detectable range from experience. This also results in an additional operation load on the user. [0010]
  • BRIEF SUMMARY OF THE INVENTION
  • It is an object of the present invention to provide a user interface apparatus for performing input by image processing, which reduces the operation load on a user and is easier to use, and an instruction input method. [0011]
  • It is another object of the present invention to provide a user interface apparatus for performing an input operation by image processing, which reduces the operation load on a user and is easier to use, and an operation range presenting method. [0012]
  • To achieve the above objects, according to the first aspect of the present invention, a user interface apparatus comprises: means for cutting out an image to be processed from an input image and performing image processing; and means for switching a mode for performing pointing and other modes on the basis of a result of the image processing of the input image. [0013]
  • According to the second aspect of the present invention, a user interface apparatus comprises: means for cutting out an image to be processed from an input image and performing image processing; and means for switching at least a cursor move mode, a select mode, and a double click mode on the basis of a result of the image processing of the input image. [0014]
  • Preferably, the apparatus further comprises means for designating a recognition method (recognition engine) of limiting image processing contents for each object selectable in the select mode, wherein the image processing of the input image is performed for a selected object in accordance with a recognition method designated for the object. [0015]
  • Preferably, the apparatus further comprises means for designating a recognition method (recognition engine) of limiting image processing contents for each object selectable in the select mode, and means for presenting, near a displayed object indicated by a cursor, information indicating a recognition method designated for the object. [0016]
  • Preferably, the apparatus further comprises means for presenting the result of the image processing of the input image in a predetermined shape on a cursor. [0017]
  • According to still another aspect of the present invention, a user interface apparatus comprises a first device for inputting a reflected image, and a second device for performing input by image processing of an input image, wherein the second device comprises means for designating a recognition method (recognition engine) of limiting contents of image processing of an input image with respect to the first device, and the first device comprises means for performing predetermined image processing on the basis of the designated recognition method, and means for sending back the input image and a result of the image processing to the second device. [0018]
  • Preferably, the first device may further comprise means for requesting the second device to transfer information necessary for image processing suited to a necessary recognition method, if the first device does not have image processing means (recognition engine) suited to the recognition method, and the second device may further comprise means for transferring the requested information to the first device. [0019]
  • Preferably, each of the first and second devices may further comprise means for requesting, when information necessary for image processing suited to a predetermined recognition method in the device is activated first, the other device to deactivate identical information, and means for deactivating information necessary for image processing suited to a predetermined recognition method when requested to deactivate the information by the other device. [0020]
  • According to still another aspect of the present invention, an instruction input method comprises the steps of performing image processing for an input image of an object, and switching a mode for performing pointing and other modes on the basis of a result of the image processing. [0021]
  • According to still another aspect of the present invention, an instruction input method using a user interface apparatus including a first device for inputting a reflected image and a second device for performing input by image processing of an input image comprises the steps of allowing the second device to designate a recognition method (recognition engine) of limiting contents of image processing of an input image with respect to the first device, and allowing the first device to perform predetermined image processing on the basis of the designated recognition method and send back the input image and a result of the image processing to the second device. [0022]
  • The present invention obviates the need for an explicit operation performed by a user to switch modes such as a cursor move mode, a select mode, and a double click mode. [0023]
  • Also, the present invention eliminates the need for calibration done by an operation by a user because a point designated by the user is read by recognition processing and reflected on, e.g., cursor movement on the screen. [0024]
  • Furthermore, the input accuracy and the user operability can be expected to be improved by the use of image processing means (recognition engine) suited to a necessary recognition method. [0025]
  • Additionally, the operation state can be fed back to a user by superposing a semitransparent input image on a cursor. [0026]
  • As described above, the present invention can provide a user interface apparatus which reduces the operation load on a user and is easier to use. [0027]
  • In the present invention, recognition processing is performed to some extent in the first device (on the device side). Therefore, it is possible to distribute the load and increase the speed of the recognition processing as a whole. [0028]
  • Also, the function of a device having an image input function can be improved. [0029]
  • Note that the invention related to each of the above apparatuses can also be established as an invention related to a method. [0030]
  • Note also that each of the above inventions can be established as a mechanically readable medium recording programs for allowing a computer to execute a corresponding procedure or means. [0031]
  • According to still another aspect of the present invention, a user interface apparatus for performing input by image processing comprises means for checking whether an object of image processing is in a proper range within which the image processing is possible, and means for presenting at least one of predetermined visual information and audio information, if it is determined that the object is outside the proper range. [0032]
  • In the present invention, if an object such as the hand of a user falls outside a proper range, this information is presented by using a display device or a sound reproducing device. Therefore, the user can easily recognize the proper range in a three-dimensional space and input a desired instruction or the like by performing gesture in the proper range. [0033]
  • According to still another aspect of the present invention, a user interface apparatus for performing input by image processing comprises means for checking whether an object of image processing is in a proper range within which the image processing is possible and, if the object is outside the proper range, checking a direction in which the object falls outside the proper range, and means for informing a user of a direction in which the object deviates from the proper range by changing a display state of a cursor displayed on a display screen into a predetermined state, if it is determined that the object is outside the proper range. [0034]
  • In the present invention, if an object such as the hand of a user deviates from a proper range, the display state of a cursor displayed on the display screen is changed into a predetermined state to inform the user of a direction in which the object deviates from the proper range. Therefore, the user can visually recognize the direction in which the object deviates from the proper range and can also easily and immediately correct the position of the object. Consequently, the user can easily recognize the proper range in a three-dimensional space and input a desired instruction or the like by performing gesture in the proper range. [0035]
  • Preferably, the cursor is made smaller and/or lighter in color if it is determined that the object is farther than the proper range. [0036]
  • Preferably, the cursor is made larger if it is determined that the object is closer than the proper range. [0037]
  • Preferably, a left side of the cursor is deformed if it is determined that the object falls outside the proper range to the left. [0038]
  • Preferably, a right side of the cursor is deformed if it is determined that the object falls outside the proper range to the right. [0039]
  • Preferably, the apparatus further comprises means for informing a user of a direction in which the object deviates from the proper range by using sound, if it is determined that the object is outside the proper range. [0040]
  • According to still another aspect of the present invention, a user interface apparatus for performing input by image processing comprises means for checking whether an object of image processing is in a proper range within which the image processing is possible and, if the object is outside the proper range, checking a direction in which the object deviates from the proper range, and means for informing a user of a direction in which the object deviates from the proper range by sound, if it is determined that the object is outside the proper range. [0041]
  • In the present invention, if an object such as the hand of a user deviates from a proper range, sound is used to inform the user of a direction in which the object deviates from the proper range. Therefore, the user can visually recognize the direction in which the object deviates from the proper range and can also easily and immediately correct the position of the object. Consequently, the user can easily recognize the proper range in a three-dimensional space and input a desired instruction or the like by performing gesture in the proper range. [0042]
  • According to still another aspect of the present invention, an object operation range presenting method in a user interface apparatus for performing input by image processing of an object comprises the steps of checking whether an object of image processing is in a proper range within which the image processing is possible, and presenting at least one of predetermined visual information and audio information when the object is outside the proper range. [0043]
  • According to still another aspect of the present invention, an object operation range presenting method in a user interface apparatus for performing input by image processing of an object comprises the steps of checking whether an object of image processing is in a proper range within which the image processing is possible and, if the object is outside the proper range, checking a direction in which the object deviates from the proper range, and informing a user of a direction in which the object deviates from the proper range by changing a display state of a cursor displayed on a display screen into a predetermined state, if it is determined that the object is outside the proper range. [0044]
  • According to still another aspect of the present invention, an object operation range presenting method in a user interface apparatus for performing input by image processing of an object comprises the steps of checking whether an object of image processing is in a proper range within which the image processing is possible and, if the object is outside the proper range, checking a direction in which the object deviates from the proper range, and informing a user of a direction in which the object deviates from the proper range by sound, if it is determined that the object is outside the proper range. [0045]
  • Note that the invention related to each of the above apparatuses can also be established as an invention related to a method. [0046]
  • Note also that each of the above inventions can be established as a mechanically readable medium recording programs for allowing a computer to execute a corresponding procedure or means. [0047]
  • Additional objects and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objects and advantages of the invention may be realized and obtained by means of the instrumentalities and combinations particularly pointed out in the appended claims. [0048]
  • BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWING
  • The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate presently preferred embodiments of the invention, and together with the general description given above and the detailed description of the preferred embodiments given below, serve to explain the principles of the invention. [0049]
  • FIG. 1 is a block diagram showing an example of the arrangement of an interface apparatus according to the first embodiment of the present invention; [0050]
  • FIG. 2 is a block diagram showing an example of the arrangement of an image input unit; [0051]
  • FIG. 3 is a view for explaining the relationship between a display device, the housing of the image input unit, and an object; [0052]
  • FIGS. 4A through 4H are flow charts showing an example of the operation procedure of the user interface apparatus of the first embodiment; [0053]
  • FIG. 5 is a view showing an example of an input image which indicates gesture for cursor control; [0054]
  • FIGS. 6A and 6B are views showing examples of screen displays; [0055]
  • FIGS. 7A and 7B are views showing examples of screen displays; [0056]
  • FIGS. 8A and 8B are views showing examples of screen displays; [0057]
  • FIG. 9 is a view showing an example of an input image which indicates gesture for selection; [0058]
  • FIGS. 10A and 10B are views showing examples of screen displays; [0059]
  • FIGS. 11A and 11B are views showing examples of screen displays; [0060]
  • FIG. 12 is a view showing an example of an input image which indicates gesture for double click; [0061]
  • FIGS. 13A and 13B are views showing examples of screen displays; [0062]
  • FIG. 14 is a view showing an example of a screen display; [0063]
  • FIGS. 15A and 15B are views showing examples of screen displays; [0064]
  • FIGS. 16A and 16B are views showing examples of screen displays; [0065]
  • FIGS. 17A and 17B are views showing examples of screen displays; [0066]
  • FIGS. 18A and 18B are views showing examples of screen displays; [0067]
  • FIG. 19 is a view for explaining processing corresponding to a designated recognition engine; [0068]
  • FIG. 20 is a view showing examples of descriptions of recognition engines for different objects; [0069]
  • FIG. 21 is a block diagram showing an example of the arrangement of an interface apparatus according to the second embodiment of the present invention; [0070]
  • FIG. 22 is a view showing an example of a description in an active list storage unit when a vertical slider bar is selected; [0071]
  • FIG. 23 is a block diagram showing another example of the arrangement of the interface apparatus according to the second embodiment of the present invention; [0072]
  • FIG. 24 is a flow chart showing an example of the operation procedure of the user interface apparatus of the second embodiment; [0073]
  • FIG. 25 is a block diagram showing still another example of the arrangement of the interface apparatus according to the second embodiment of the present invention; [0074]
  • FIG. 26 is a flow chart showing an example of the operation procedure of the user interface apparatus of the second embodiment; [0075]
  • FIG. 27 is a flow chart showing an example of the operation procedure of the user interface apparatus of the second embodiment; [0076]
  • FIG. 28 is a block diagram showing an example of the arrangement of a user interface apparatus according to still another embodiment of the present invention; [0077]
  • FIG. 29 is a flow chart showing an example of the process procedure of this embodiment; [0078]
  • FIGS. 30A and 30B are views showing examples of a dot matrix and a screen display, respectively, when an object is in a proper range; [0079]
  • FIG. 31 is a flow chart showing an example of the evaluation procedure of checking whether an object is in a proper range; [0080]
  • FIG. 32 is a view for explaining a length l of a vertical line in the image shape of an object; [0081]
  • FIG. 33 is a flow chart showing an example of the procedure of reflecting the evaluation result; [0082]
  • FIGS. 34A and 34B are views showing examples of a dot matrix and a screen display, respectively, when an object is too close; [0083]
  • FIGS. 35A and 35B are views showing examples of a dot matrix and a screen display, respectively, when an object is too far; [0084]
  • FIGS. 36A and 36B are views showing examples of a dot matrix and a screen display, respectively, when an object protrudes to the left; [0085]
  • FIGS. 37A and 37B are views showing examples of a dot matrix and a screen display, respectively, when an object protrudes to the right; and [0086]
  • FIG. 38 is a flow chart showing another example of the procedure of reflecting the evaluation result. [0087]
  • DETAILED DESCRIPTION OF THE INVENTION
  • Embodiments of the present invention will be described below with reference to the accompanying drawings. [0088]
  • The first embodiment will be described below. [0089]
  • FIG. 1 is a block diagram showing the arrangement of a user interface apparatus according to the first embodiment of the present invention. [0090]
  • This user interface apparatus is suitably applicable to, e.g., a computer having a graphical user interface. That is, this apparatus is a system in which a cursor, a slider bar, a scroll bar, a pull-down menu, a box, a link, and icons of applications are displayed on the display screen, and a user inputs an instruction for moving a cursor, selecting an icon, or starting an application by using an input device. The input device receives inputs by performing image processing for an object such as the hand of a user without requiring any dedicated device such as a mouse. [0091]
  • Briefly, the apparatus of this embodiment receives reflected light from an object such as the hand of a user as an image (or receives background light reflected by an object as an image), detects information of the shape, motion, or distance of the object, and performs predetermined control (e.g., control relating to I/O devices or start of application software) in accordance with the shape or the like. That is, this embodiment provides a function by which a user can perform an intended input operation by, e.g., moving his or her hand. Also, modes such as a cursor move mode, an icon select mode, and an application start mode are switched in accordance with the image processing result. Therefore, the user need not perform any explicit operation of switching the modes. [0092]
  • This user interface apparatus includes an image input unit 10, an image storage unit 11, a shape interpreting unit 12, an interpretation rule storage unit 13, a presenting unit 14, and a cursor switching unit 15. [0093]
  • FIG. 2 shows an example of the arrangement of the image input unit 10. [0094]
  • The image input unit 10 includes a light-emitting unit 101, a reflected light extracting unit 102, and a timing controller 103. The light-emitting unit 101 irradiates light such as near infrared rays onto an object by using light-emitting elements such as LEDs. The reflected light extracting unit 102 receives the reflected light from the object by using light-receiving elements arranged in the form of a two-dimensional array. The timing controller 103 controls the operation timings of the light-emitting unit 101 and the reflected light extracting unit 102. The difference between the amount of light received by the reflected light extracting unit 102 when the light-emitting unit 101 emits light and the amount of light received by the reflected light extracting unit 102 when the light-emitting unit 101 does not emit light is calculated to correct the background, thereby extracting only a component of the light emitted from the light-emitting unit 101 and reflected by the object. Note that the image input unit 10 need not have any light-emitting unit, i.e., can have only a light-receiving unit such as a CCD camera. [0095]
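  • The background correction described above can be pictured with a minimal sketch, given only for illustration: it assumes the lit and unlit frames arrive as NumPy arrays of equal size, and the array names and the clipping step are assumptions rather than part of the apparatus.

      import numpy as np

      def reflected_light_image(frame_lit, frame_unlit):
          # Subtract the frame captured with the light-emitting unit off from the
          # frame captured with it on; background light cancels out and only the
          # component emitted by the light-emitting unit and reflected by the
          # object remains.
          diff = frame_lit.astype(np.int16) - frame_unlit.astype(np.int16)
          return np.clip(diff, 0, 255).astype(np.uint8)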
  • FIG. 3 shows the relationship between a display device 20, a housing 8 of the image input unit 10, and an object 22. For example, when a user puts a hand 22 before the image input unit 10, an image of the reflected light from the hand is obtained. Each pixel value of the reflected light image is affected by the property of the object (e.g., whether the object mirror-reflects, scatters, or absorbs light), the direction of the object surface, the distance to the object, and the like factor. However, if a whole object uniformly scatters light, the amount of the reflected light has a close relationship to the distance to the object. Since a hand has this property, the reflected light image when a user puts his or her hand before the image input unit reflects the distance to the hand, the inclination of the hand (the distance changes from one portion to another), and the like. Therefore, various pieces of information can be input and generated by extracting these pieces of information. [0096]
  • The image storage unit 11 sequentially stores two-dimensional images of an object of image detection, which are output at predetermined time intervals (e.g., 1/30, 1/60, or 1/100 sec) from the image input unit 10. [0097]
  • The shape interpreting unit 12 sequentially fetches, as N×N (e.g., 64×64) dot matrices, the two-dimensional images stored in the image storage unit 11. Note that each pixel has gradation levels (e.g., 8 bits=256 gradation levels). [0098]
  • Also, the shape interpreting unit 12 extracts a predetermined feature amount from a dot matrix and interprets the shape on the basis of the interpretation rules stored in the interpretation rule storage unit 13. The shape interpreting unit 12 outputs an instruction corresponding to a suitable interpretation rule as an interpretation result. If there is no suitable interpretation rule, it is also possible, where necessary, to change the way a predetermined feature amount is extracted from a dot matrix (e.g., change a threshold value when dot matrix threshold processing is performed) and again perform the matching processing. If no suitable interpretation rule is finally found, it is determined that there is no input. [0099]
  • The interpretation rule storage unit 13 stores interpretation rules for shape interpretation. For example, predetermined contents such as feature amounts, e.g., the shape, area, uppermost point, and the center of gravity of an object such as the hand of a user in a dot matrix and designation contents corresponding to these predetermined contents are stored as interpretation rules. The designation contents include, e.g., selection of an icon, start of an application, and movement of a cursor. When cursor movement is to be performed, the moving amount of a cursor corresponding to the direction and the distance of the movement of the hand is also designated. For example, the following rules are possible. That is, a state in which the thumb and the index finger are open and raised is used to indicate cursor movement (in this case, the distance and the direction of the movement of the tip of the index finger correspond to the distance and the direction of the movement of the cursor). A state in which the thumb and the index finger are closed and raised is used to indicate selection of an icon in a position where the cursor exists. A state in which the thumb and the index finger are raised and the palm of the hand is turned from that in the cursor movement is used to indicate start of an application corresponding to an icon in a position where the cursor exists. Examples of the stored rules in the interpretation rule storage unit 13 are as follows. [0100]
  • Select recognition engine: select engine [0101]
  • direction: forward direction←a direction in which the thumb of the right hand is positioned on the leftmost side [0102]
  • thumb: stretched←all joints of the finger are stretched [0103]
  • index finger: stretched←all joints of the finger are stretched [0104]
  • middle finger: bent←all joints of the finger are bent [0105]
  • ring finger: bent←all joints of the finger are bent [0106]
  • little finger: bent←all joints of the finger are bent [0107]
  • Select recognition engine: execute engine [0108]
  • IF immediately preceding selected engine: select engine←a condition is that an immediately preceding selected engine is a select engine [0109]
  • rotational angle from immediately preceding selected engine: 180°←a condition is that a rotational angle from a select engine is 180°[0110]
  • direction: reverse direction←a direction in which the thumb of the right hand is positioned on the rightmost side [0111]
  • thumb: stretched←all joints of the finger are stretched [0112]
  • index finger: stretched←all joints of the finger are stretched [0113]
  • middle finger: bent←all joints of the finger are bent [0114]
  • ring finger: bent←all joints of the finger are bent [0115]
  • little finger: bent←all joints of the finger are bent [0116]
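  • The example rules above can be thought of as records that pair per-finger conditions with the recognition engine to select. The following sketch is only one illustrative reading of that structure, assuming a dictionary-based rule table and a matching helper; the key names and the observed-feature format are assumptions, not the storage format of the apparatus.

      # Hypothetical representation of the two interpretation rules listed above.
      SELECT_ENGINE_RULE = {
          "engine": "select engine",
          "direction": "forward",          # thumb of the right hand on the leftmost side
          "fingers": {"thumb": "stretched", "index": "stretched",
                      "middle": "bent", "ring": "bent", "little": "bent"},
      }
      EXECUTE_ENGINE_RULE = {
          "engine": "execute engine",
          "requires_previous_engine": "select engine",
          "rotation_from_previous_deg": 180,
          "direction": "reverse",          # thumb of the right hand on the rightmost side
          "fingers": {"thumb": "stretched", "index": "stretched",
                      "middle": "bent", "ring": "bent", "little": "bent"},
      }

      def rule_matches(rule, observed):
          # 'observed' is a dict of features extracted from the dot matrix,
          # e.g. {"direction": "forward", "fingers": {...}}.
          return all(observed.get(key) == value
                     for key, value in rule.items() if key != "engine")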
  • Representative examples of the extraction of a feature amount from a dot matrix in the shape interpretation by the shape interpreting unit 12 are distance information extraction and region extraction. If an object has a uniform homogeneous scattering surface, the reflected light image can be regarded as a distance image. Accordingly, the three-dimensional shape of the object can be extracted from the light-receiving unit. If the object is a hand, an inclination of the palm of the hand, for example, can be detected. The inclination of the palm of the hand appears as the difference between partial distances. If pixel values change when the hand is moved, it can be considered that the distance changes. Also, almost no light is reflected from a far object such as a background. Therefore, in processing of cutting out a region having a certain threshold value or more from a reflected light image, the shape of an object can be easily cut out. If the object is a hand, for example, it is very easy to cut out the silhouette image of the hand. Even when a distance image is used, a general approach is to once perform region extraction by using a threshold value and then use distance information in the region. [0117]
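  • As a rough illustration of the region extraction just described, the sketch below thresholds a reflected-light dot matrix to obtain a silhouette mask and keeps the distance-like pixel values only inside that region. The threshold value and array names are assumptions chosen for illustration.

      import numpy as np

      def extract_region(dot_matrix, threshold=32):
          # Pixels brighter than the threshold are treated as belonging to the
          # object; the far background reflects almost no light and drops out.
          mask = dot_matrix >= threshold
          silhouette = mask.astype(np.uint8)           # cut-out object shape
          distances = np.where(mask, dot_matrix, 0)    # distance-like values inside the region
          return silhouette, distances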
  • Various methods are usable as a method of matching a feature amount extracted from a dot matrix with the interpretation rules. Examples are vector formation by which a vector is extracted from an image, extraction of a shape deformed state based on a shape model, and spectral analysis based on a distance value on a scan line. [0118]
  • If there is no suitable shape, the matching processing can be reexecuted by changing the threshold value or the like. If no suitable shape is finally found, it is determined that there is no input. [0119]
  • If the shape interpreting unit 12 determines that an instruction is for starting the function of an application or an OS, the corresponding software is started. [0120]
  • The presenting unit 14 performs presentation reflecting the interpretation result from the shape interpreting unit 12 on the display device. For example, the movement of a cursor and, where necessary, messages are presented. [0121]
  • The cursor switching unit 15 controls cursor switching on the basis of the interpretation result from the shape interpreting unit 12. [0122]
  • FIGS. 4A through 4H show an example of the operation procedure of the user interface apparatus of this embodiment. [0123]
  • First, a cursor control state C is initialized (C←0), a selected state S is initialized (S←0), cursor information I is initialized (I←0), and a recognition engine flag R is initialized (R←0) (step S1). [0124]
  • Next, a reflected image is written in the image storage unit 11 (step S2). [0125]
  • A dot matrix is loaded to the shape interpreting unit 12 (step S3). [0126]
  • Subsequently, the shape interpreting unit 12 checks a mode indicated by gesture from a feature amount extracted from the dot matrix and the interpretation rules (step S4). [0127]
  • Thereafter, the processing branches in accordance with the determination result and the parameter values. [0128]
  • If the gesture indicates cursor control and the parameters are C=0, S=0, and R=0 (step S5), the processing is cursor control. Therefore, C←1 is set (step S11), and the flow returns to step S2. [0129]
  • The gesture indicating cursor control is detected when, for example, the shape of a hand shown in FIG. 5 is recognized. That is, the gesture is detected when the thumb and the index finger of the right hand are open and raised upward. [0130]
  • If the gesture indicates cursor control and the parameters are C=1, S=0, and R=0 (step S6), the processing is cursor movement. If this is the case, coordinates (x,y) of a close point are calculated from the dot matrix (step S12), and the cursor is moved to the calculated coordinates (x,y) (step S13). The calculated coordinates (x,y) are held as Cp=(x,y) (step S14). If an object exists at Cp (step S15), the state of the object is set (I←object state) (step S16). If there is no object (step S15), I=0 is set (step S17), and the flow returns to step S2. [0131]
  • If the gesture indicates cursor control and the parameters are C=1 and S=1, the processing is returned to cursor control. Therefore, S←0, R←0, and I←0 are set (step S18), and the flow returns to step S2. [0132]
  • If the gesture indicates selection and the parameters are C=1, S=0, and R=0 (step S8), the processing is selection of an object. If this is the case, S←1 is set (step S19), an object closest to Cp is searched for (step S20), and the found object is selected (step S21). If the selected object has a designated recognition engine (step S22), R←1 is set (step S23), and the flow returns to step S2. If the selected object has no designated recognition engine and is a link object (step S24), the flow jumps to the link destination (step S25), C←0, S←0, and I←0 are set (step S26), and the flow returns to step S2. If the selected object is not a link object (step S24), the flow immediately returns to step S2. [0133]
  • The gesture indicating selection is detected when, for example, the shape of a hand shown in FIG. 9 is recognized. That is, the gesture is detected when the thumb and the index finger of the right hand are closed and raised upward. [0134]
  • A recognition engine is for extracting a predetermined feature amount from a dot matrix as will be described in detail later, and various recognition engines are usable. One example is an uppermost point vertical direction engine which extracts the vertical moving amount of the uppermost point of an object shape in a dot matrix. [0135]
  • If the gesture indicates selection and R=1 (step S9), the processing is movement of the selected object. Therefore, recognition suited to the object to be recognized is performed (step S27), and the flow returns to step S2. [0136]
  • If the gesture indicates double click (execution) and C=1 (step S10), the processing is double click. If this is the case, an object closest to Cp is opened (step S28), C←0, S←0, I←0, and R←0 are set (step S29), and the flow returns to step S2. [0137]
  • The gesture indicating double click is detected when, e.g., the shape of a hand shown in FIG. 12 is recognized. That is, the gesture is detected when the thumb and the index finger of the left hand are open and raised upward. [0138]
  • In cases other than those described above, other recognition processing is performed (step S30), and the flow returns to step S2. [0139]
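  • The branching just described can be summarized as a dispatch on the interpreted gesture together with the state variables C, S, I, and R. The sketch below is only a schematic rendering of that flow; the gesture labels, the state dictionary, and the callables passed in through "actions" are placeholder names, not actual modules of the apparatus.

      def dispatch(gesture, state, actions):
          # state: dict with keys "C", "S", "I", "R"; actions: dict of callables.
          C, S, R = state["C"], state["S"], state["R"]
          if gesture == "cursor" and (C, S, R) == (0, 0, 0):
              state["C"] = 1                                   # step S11
          elif gesture == "cursor" and (C, S, R) == (1, 0, 0):
              actions["move_cursor"]()                         # steps S12-S17
          elif gesture == "cursor" and C == 1 and S == 1:
              state.update(S=0, R=0, I=0)                      # step S18
          elif gesture == "select" and (C, S, R) == (1, 0, 0):
              state["S"] = 1
              actions["select_object"]()                       # steps S19-S26
          elif gesture == "select" and R == 1:
              actions["recognize_for_object"]()                # step S27
          elif gesture == "double_click" and C == 1:
              actions["open_object"]()                         # step S28
              state.update(C=0, S=0, I=0, R=0)                 # step S29
          else:
              actions["other_recognition"]()                   # step S30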
  • This embodiment will be described below by way of its practical example. [0140]
  • First, assume that cursor movement is designated when the shape of a hand in which the thumb and the index finger are stretched as shown in FIG. 5 is recognized. [0141]
  • If the hand shape in FIG. 5 is recognized in a state shown in FIG. 6A and the user moves the hand in the state shown in FIG. 5, a cursor 201 moves on the display screen accordingly. If this is the case, the amount and direction of the movement of a fixed point in a dot matrix of the user's hand shape, e.g., an uppermost point (e.g., the tip of the index finger) which is in the uppermost position in the vertical direction in the image or a nearest point (e.g., a point with the highest gradation level) which is nearest to the light-receiving unit in the image, are extracted. [0142]
  • Note that as shown in FIG. 6B, it is also possible to display a semitransparent input image 202 on the cursor 201 to feed the recognition state back to the user. [0143]
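  • A rough sketch of how the uppermost point and the nearest point mentioned above could be located in an N×N dot matrix is given below. It assumes the matrix is a NumPy array in which larger values mean a closer (brighter) reflection; the threshold value is likewise an assumption for illustration.

      import numpy as np

      def nearest_point(dot_matrix):
          # The pixel with the highest gradation level is taken as the point
          # nearest to the light-receiving unit.
          y, x = np.unravel_index(np.argmax(dot_matrix), dot_matrix.shape)
          return int(x), int(y)

      def uppermost_point(dot_matrix, threshold=32):
          # The first row (from the top) containing an object pixel gives the
          # uppermost point, e.g. the tip of the index finger.
          ys, xs = np.nonzero(dot_matrix >= threshold)
          if ys.size == 0:
              return None
          top = ys.min()
          return int(xs[ys == top].min()), int(top)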
  • If the cursor exists on an object such as a slider bar or a link node, the shape of the cursor is so changed as to present the function of the object. [0144]
  • For example, if the cursor moves to a slider bar as shown in FIG. 7A, the shape of the cursor is changed as indicated by reference numeral 203. Note that a semitransparent input image 205 can also be displayed on the cursor as shown in FIG. 7B. [0145]
  • An arrow 204 in FIG. 7A indicates that an operation of the slider bar as an object to be operated is limited in the vertical direction. That is, a vertical direction engine is designated for the slider bar. If this is the case, only vertical movement is recognized regardless of how the user moves his or her hand. [0146]
  • Also, if the cursor moves to a link node as shown in FIG. 8A, the shape of the cursor is deformed as indicated by reference numeral 206. Note that a semitransparent input image 207 can also be displayed on the cursor as shown in FIG. 8B. [0147]
  • After the cursor is moved to a desired position by the shape of the hand shown in FIG. 5, if the shape of a hand in which the index finger is stretched and the thumb is bent as shown in FIG. 9 is recognized, this means that selection of an object indicated by the cursor is designated. This is equivalent to single click of a mouse. [0148]
  • For example, if the shape of the hand in FIG. 9 is recognized in a state as shown in FIG. 10A, a slider bar is selected. An arrow 208 in FIG. 10A indicates that an operation of the slider bar as an object to be operated is limited in the vertical direction. Note that a semitransparent input image 209 can also be displayed on the cursor as shown in FIG. 10B. [0149]
  • If the shape of the hand in FIG. 9 is recognized in the state shown in FIG. 8A, a link node “OUTLINE OF COMPANY” is selected, and the display contents are changed as shown in FIG. 11A. Note that a semitransparent input image 210 can also be displayed on the cursor as shown in FIG. 11B. [0150]
  • After the cursor is moved to a desired position by the shape of the hand shown in FIG. 5, if the same hand shape except that the wrist is turned approximately 180° as shown in FIG. 12 is recognized, this means that double click of an object indicated by the cursor is designated. [0151]
  • In FIG. 13A, for example, the cursor is moved onto an icon “INTERNET” with the shape of the hand shown in FIG. 5, and subsequently the hand is turned almost 180° as shown in FIG. 12. Consequently, double click of “INTERNET” is accepted. FIG. 13B shows a state immediately before the selected icon is opened. Note that a semitransparent input image 212 can also be displayed on the cursor as shown in FIG. 14. [0152]
  • Also, for example, the cursor is moved to “FILE” as shown in FIG. 15A. At this time, a semitransparent input image 213 can also be displayed on the cursor as shown in FIG. 15B. [0153]
  • When “FILE” is selected, a pull-down menu is displayed as shown in FIG. 16A. An arrow 214 indicates that an operation of the pull-down menu as an object to be operated is limited in the vertical direction. [0154]
  • When the cursor is moved onto “SAVE” as shown in FIG. 16A and the hand is turned almost 180° as shown in FIG. 12, double click of “SAVE” is accepted. As in the above operations, a semitransparent input image 213 can also be displayed on the cursor as shown in FIG. 16B. [0155]
  • When “SAVE” is double-clicked, the shape of the cursor is deformed as indicated by reference numeral 216 in FIG. 17A. This indicates that an operation of saving a document is being executed by the selection of save. As in the above operations, a semitransparent input image 217 can also be displayed on the cursor as shown in FIG. 17B. [0156]
  • When, for example, the cursor is moved to “FILE” to select “FILE” and then moved to “PRINT” to select “PRINT” as shown in FIG. 18A, a menu corresponding to “PRINT” is displayed. At this time, a semitransparent input image 219 can also be displayed on the cursor as shown in FIG. 18B. [0157]
  • A recognition engine will be described below. [0158]
  • FIG. 19 is a flow chart of processing concerning a recognition engine. [0159]
  • In this embodiment, a recognition engine is designated for each object where necessary. For example, this designation can be done, by programming, by a programmer who creates an application program using the present invention. [0160]
  • A recognition engine extracts a predetermined feature amount from a dot matrix. That is, if there is a recognition engine designated by a selected object (step S31), the shape interpreting unit 12 extracts a feature amount from a dot matrix in accordance with the designated recognition engine. If there is no designated recognition engine (step S31), the shape interpreting unit 12 executes normal recognition (step S32). [0161]
  • Various recognition engines are usable. Examples are a nearest point vertical direction engine 121 for extracting a vertical moving amount of a nearest point of an object shape in a dot matrix, a nearest point horizontal direction engine 122 for extracting a horizontal moving amount of the nearest point, a nearest point oblique direction engine 123 for extracting an oblique moving amount of the nearest point, a barycentric point vertical direction engine 124 for extracting a vertical moving amount of a barycentric point, a barycentric point horizontal direction engine 125 for extracting a horizontal moving amount of the barycentric point, a barycentric point oblique direction engine 126 for extracting an oblique moving amount of the barycentric point, an uppermost point vertical direction engine 127 for extracting a vertical moving amount of an uppermost point, an uppermost point horizontal direction engine 128 for extracting a horizontal moving amount of the uppermost point, an uppermost point oblique direction engine 129 for extracting an oblique moving amount of the uppermost point, an edge cutting engine 130 for cutting out the edge of an object shape in a dot matrix, an area calculating engine 131 for calculating the area of an object shape in a dot matrix, a nearest point x-axis rotational angle engine 132 for extracting a rotational angle around the x-axis of a nearest point of an object shape in a dot matrix, a nearest point y-axis rotational angle engine 133 for extracting a rotational angle around the y-axis of the nearest point, a nearest point z-axis rotational angle engine 134 for extracting a rotational angle around the z-axis of the nearest point, a barycentric point x-axis rotational angle engine 135 for extracting a rotational angle around the x-axis of a barycentric point, a barycentric point y-axis rotational angle engine 136 for extracting a rotational angle around the y-axis of the barycentric point, a barycentric point z-axis rotational angle engine 137 for extracting a rotational angle around the z-axis of the barycentric point, an uppermost point x-axis rotational angle engine 138 for extracting a rotational angle around the x-axis of an uppermost point, an uppermost point y-axis rotational angle engine 139 for extracting a rotational angle around the y-axis of the uppermost point, an uppermost point z-axis rotational angle engine 140 for extracting a rotational angle around the z-axis of the uppermost point, and a recognition engine 141 obtained by weighting and combining predetermined engines. [0162]
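  • One way to picture these engines is as a registry of small feature extractors keyed by name, from which the engine designated for an object is looked up at recognition time. The sketch below is a simplified illustration of that idea only; the registry layout, the engine names, and the helper function are assumptions, not the patent's implementation.

      import numpy as np

      def _nearest(dot_matrix):
          # Highest gradation level is treated as the nearest point.
          y, x = np.unravel_index(np.argmax(dot_matrix), dot_matrix.shape)
          return int(x), int(y)

      # Registry keyed by engine name; each engine maps two successive dot
      # matrices to the single feature it is responsible for.
      RECOGNITION_ENGINES = {
          "nearest_point_vertical":   lambda prev, cur: _nearest(cur)[1] - _nearest(prev)[1],
          "nearest_point_horizontal": lambda prev, cur: _nearest(cur)[0] - _nearest(prev)[0],
          "area":                     lambda prev, cur: int(np.count_nonzero(cur >= 32)),
      }

      def run_engine(name, prev_matrix, cur_matrix):
          # Fall back to None when no engine is designated (normal recognition).
          engine = RECOGNITION_ENGINES.get(name)
          return engine(prev_matrix, cur_matrix) if engine else None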
  • FIG. 20 shows examples of descriptions of recognition engines for different objects. [0163]
  • This embodiment as described above obviates the need for an explicit operation performed by a user to switch modes such as a cursor move mode, a select mode, and a double click mode. [0164]
  • Also, the embodiment eliminates the need for calibration done by an operation by a user because a point designated by the user is read by recognition processing and reflected on, e.g., cursor movement on the screen. [0165]
  • Furthermore, the input accuracy and the user operability can be expected to be improved by the use of a recognition engine. [0166]
  • Additionally, the operation state can be fed back to a user by superposing a semitransparent input image on a cursor. [0167]
  • As described above, this embodiment can provide a user interface apparatus which reduces the operation load on a user and is easier to use. [0168]
  • The second embodiment will be described next. [0169]
  • The second embodiment is basically the same as the first embodiment except that a part of the recognition processing is performed inside an image input unit (to be referred to as a device side hereinafter) and the image input unit 10 transfers a dot matrix of an input image and a predetermined recognition result to the main body. Note that the recognition processing performed on the device side is desirably light-load processing. [0170]
  • FIG. 21 shows an example of the arrangement of an interface apparatus according to this embodiment. [0171]
  • Referring to FIG. 21, the main body incorporates a main body controller 32, a presenting unit 14, and a cursor switching unit 15, and the device side incorporates the image input unit 10, an image storage unit 11, a recognition engine controller 30, an active list 31, and several predetermined recognition engines 121, 122, 142, 143, and 144. [0172]
  • The main body controller 32 corresponds to the shape interpreting unit 12 and the interpretation rule storage unit 13 (including the recognition engines) shown in FIG. 1. However, the main body controller 32 can have another arrangement, perform another recognition processing, or use recognition engines. [0173]
  • FIG. 22 shows an example of a description in the active list storage unit when a vertical slider bar is selected. FIG. 22 shows that the cursor engine 142 and the nearest point vertical direction engine 121 are designated. [0174]
  • In this arrangement, the main body side sends a list of recognition engines to be activated or a list of recognition engines to be deactivated to the device side. On the device side, this list is stored in the active list storage unit 31. The recognition engine controller 30 extracts a predetermined feature amount as a recognition result from an input image in accordance with a designated recognition engine and sends back the input image and the recognition result to the main body side. [0175]
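  • As an informal sketch of this exchange, the device-side controller below stores the active list it receives and, for each captured dot matrix, runs only the engines named in that list before returning the image together with the recognition results. The class and method names are placeholders chosen for illustration, not parts of the disclosed apparatus.

      class DeviceSideController:
          def __init__(self, engines):
              # engines: dict mapping engine names to callables over a dot matrix.
              self.engines = engines
              self.active_list = []

          def receive_active_list(self, active_list):
              # Corresponds to storing the list in the active list storage unit 31.
              self.active_list = list(active_list)

          def process(self, dot_matrix):
              # Run only the activated engines and send both the input image and
              # the recognition results back to the main body side.
              results = {name: self.engines[name](dot_matrix)
                         for name in self.active_list if name in self.engines}
              return dot_matrix, results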
  • In this embodiment, the device side performs recognition processing to a certain degree. Consequently, it is possible to distribute the load and improve the speed of the recognition processing as a whole. [0176]
  • Also, the function of a device having an image input function can be improved. [0177]
  • FIG. 23 shows another configuration of this embodiment in which a recognition engine storage unit 33 is added to the configuration in FIG. 21. FIG. 24 shows the flow of processing in FIG. 23. This flow of processing will be described below with reference to FIG. 24. [0178]
  • In the arrangement shown in FIG. 21, the main body sends an active list (or an inactive list) which is a list of recognition engines to be activated (or deactivated) on the device side. All recognition engines contained in the active list (or the inactive list) exist on the device side. [0179]
  • If only a few recognition engines are used on the device side, all of them can be mounted on the device side. As the number of types of recognition engines increases, however, not all of the mounted engines are actually used, so mounting every engine on the device becomes uneconomical. [0180]
  • To eliminate this inconvenience, therefore, in the configuration shown in FIG. 23, if a recognition engine in the active list does not exist on the device side, the main body transfers the recognition engine so that the engine can be operated on the device side. [0181]
  • In step S33 of FIG. 24, the main body sends the active list to the device side. The recognition engine controller 30 on the device side checks whether all recognition engines described in the active list exist on the device side (step S34). If all recognition engines exist, the recognition engine controller 30 executes recognition by using these recognition engines (step S38). [0182]
  • On the other hand, if the recognition engine controller 30 determines in step S34 that a described recognition engine does not exist on the device side, the controller 30 sends a transfer request for the corresponding recognition engine to the main body (step S35). Upon receiving the transfer request, the main body controller 32 reads out the corresponding recognition engine from the recognition engine storage unit 33 and transfers the readout engine to the device side. The recognition engine controller 30 on the device side receives the recognition engine (step S36). The received recognition engine is written in the recognition engine controller 30. Simultaneously, the recognition engine controller 30 rewrites the active list stored in the active list storage unit 31 with the active list transferred from the main body (step S37). Thereafter, the recognition engine controller 30 executes recognition (step S38). A recognition engine to be transferred is a copy of a recognition engine stored in the recognition engine storage unit 33. [0183]
  • By the above processing, even if the device side does not have a certain recognition engine, recognition can be well executed by transferring the recognition engine from the main body. [0184]
  • When a large number of recognition engines are transferred from the main body side and consequently the recognition engines can no longer be stored in the recognition engine controller 30, recognition engines not listed in the active list stored in the active list storage unit 31 are deleted, and the recognition engines transferred from the main body side are stored in the resulting empty spaces. [0185]
  • As a method of deleting inactive recognition engines, in addition to simply deleting them, it is also possible to transfer inactive recognition engines to the main body side and store them in the recognition engine storage unit 33 to prepare for the next use. If the corresponding recognition engines are already stored in the recognition engine storage unit 33, the transferred inactive recognition engines are discarded without being written. [0186]
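  • The request-and-transfer fallback described in steps S33 through S38 can be sketched roughly as follows. This is a schematic only, assuming the main body exposes a transfer_engine call that returns a copy of the stored engine; both sides' interfaces here are illustrative names, not defined by the patent.

      def prepare_engines(device, main_body, active_list):
          # device.engines: dict of locally available engines (name -> callable).
          # main_body.transfer_engine(name): returns a copy of the stored engine.
          for name in active_list:
              if name not in device.engines:          # step S34: engine missing on the device side
                  device.engines[name] = main_body.transfer_engine(name)   # steps S35-S36
          device.active_list = list(active_list)      # step S37: rewrite the stored active list
          # Step S38: recognition can now run with every engine named in the active list.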
  • FIG. 25 shows still another configuration of the interface apparatus according to this embodiment. [0187]
  • The main body shown in FIG. 25 is substantially the same as that shown in FIG. 23 except that, if recognition engines exist in a plurality of locations such as the main body side and the device side, the side having a recognition engine which is activated first requests the other side having an identical recognition engine to deactivate that recognition engine. [0188]
  • FIG. 26 shows an example of the operation procedure of this active request. FIG. 27 shows an example of the operation procedure of active request reception. [0189]
  • First, image input is performed on the device side (step S39), and recognition is executed on the main body side and/or the device side (step S40). The side which executes recognition transfers the recognition result, an image matrix, and an active list (or an inactive list) to the other side (step S41). [0190]
  • Next, the receiving side receives the recognition result, the image matrix, and the active list (or the inactive list) (step S42), and updates its stored active list so that recognition engines marked ON in the received active list (or marked OFF in the received inactive list) are set OFF in the stored list (step S43). The receiving side then executes other processing where necessary (step S44). [0191]
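  • A compact way to view this handshake is that whichever side activates an engine first sends its active list, and the receiver switches off its own copy of any engine named there so that the same engine never stays active on both sides at once. The sketch below illustrates only that rule; the dictionary shapes are assumptions for illustration.

      def apply_received_active_list(stored_active, received_active):
          # stored_active / received_active: dicts mapping engine names to
          # True (ON) or False (OFF).  Any engine the sender has switched ON
          # is switched OFF locally.
          for name, is_on in received_active.items():
              if is_on and name in stored_active:
                  stored_active[name] = False
          return stored_active

  • For example, if the device side activates the cursor engine first and sends its list, the main body side, on receiving it, would mark its own cursor engine OFF.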
  • Note that each of the above functions can also be realized by software. Furthermore, these functions can be practiced as mechanically readable media recording programs for allowing a computer to execute the procedures or the means described above. [0192]
  • FIGS. 28 through 38 show still another embodiment of the present invention. This embodiment checks whether an object of image processing is in a proper range within which the image processing is possible. [0193]
  • This embodiment of the present invention will be described below with reference to the accompanying drawings. [0194]
  • FIG. 28 is a block diagram showing the arrangement of a user interface apparatus according to this embodiment of the present invention. FIG. 29 shows an example of the operation procedure of the user interface apparatus of this embodiment. [0195]
  • This user interface apparatus is suitably applicable to, e.g., a computer having a graphical user interface. That is, this apparatus is a system in which a cursor, a slider bar, a scroll bar, a pull-down menu, a box, a link, and icons of applications are displayed on the display screen, and a user inputs an instruction for moving a cursor, selecting an icon, or starting an application by using an input device. The input device receives inputs by performing image processing for an object such as the hand of a user without requiring any dedicated device such as a mouse. [0196]
  • This user interface apparatus is roughly divided into an input function section and a feedback function section. [0197]
  • The input function section, which can be a well-known mechanism, emits light, receives reflected light from an object such as the hand of a user as an image (or receives background light reflected by an object as an image), detects information of the shape, motion, or distance of the object, and performs predetermined control (e.g., control relating to I/O devices or start of application software) in accordance with the shape or the like. That is, this input function section provides a function by which a user can perform an intended input operation by, e.g., moving his or her hand. In this embodiment, the input function section includes an image storage unit 11, a shape interpreting unit 12, an interpretation rule storage unit 13, and a presenting unit 14. [0198]
  • The feedback function section according to the present invention checks whether an object of image detection such as the hand of a user exists in a proper range and presents the evaluation result to the user. In this embodiment, the feedback function section includes the image storage unit 11, a proper range evaluating unit 15, and an evaluation result reflecting unit 16. [0199]
  • The image storage unit 11, which is a common unit of the two functions, and an image input device (not shown) will be described first. [0200]
  • The image storage unit 11 sequentially stores two-dimensional images of an object of image detection, which are output at predetermined time intervals (e.g., 1/30, 1/60, or 1/100 sec) from the image input device (not shown). [0201]
  • The image input device (not shown) includes a light-emitting unit and a light-receiving unit. The light-emitting unit irradiates light such as near infrared rays onto an object by using light-emitting elements such as LEDs. The light-receiving unit receives the reflected light from the object by using light-receiving elements arranged in the form of a two-dimensional array. The difference between the amount of light received when the light-emitting unit emits light and the amount of light received when the light-emitting unit does not emit light is calculated to correct the background, thereby extracting only a component of the light emitted from the light-emitting unit and reflected by the object. Note that the image input device need not have any light-emitting unit, i.e., can have only a light-receiving unit such as a CCD camera. [0202]
  • For example, when a user moves a hand in front of the image input device, an image of the reflected light from the hand is obtained. Each pixel value of the reflected light image is affected by the property of the object (e.g., whether the object mirror-reflects, scatters, or absorbs light), the direction of the object surface, the distance to the object, and the like factor. However, if a whole object uniformly scatters light, the amount of the reflected light has a close relationship to the distance to the object. Since a hand has this property, the reflected light image when a user moves his or her hand in front of the image input device reflects the distance to the hand, the inclination of the hand (the distance changes from one portion to another), and the like. Therefore, various pieces of information can be input and generated by extracting these pieces of information. [0203]
  • The input function section will be explained next. [0204]
  • The shape interpreting unit 12 sequentially fetches, as an N×N (e.g., 64×64) dot matrix, the two-dimensional images stored in the image storage unit 11 (step S102). Note that each pixel has gradation levels (e.g., 8 bits=256 gradation levels). FIG. 30A shows an example of a dot matrix when a hand is an object. [0205]
  • Subsequently, the shape interpreting unit 12 extracts a predetermined feature amount from a dot matrix and interprets the shape on the basis of interpretation rules stored in the interpretation rule storage unit 13 (step S103). The shape interpreting unit 12 outputs an instruction corresponding to a suitable interpretation rule as an interpretation result (steps S104 and S105). If there is no suitable interpretation rule, it is also possible, where necessary, to change the way a predetermined feature amount is extracted from a dot matrix (e.g., change a threshold value when dot matrix threshold processing is performed) and again perform the matching processing. If no suitable interpretation rule is finally found (step S104), it is determined that there is no input. [0206]
  • The interpretation rule storage unit 13 stores interpretation rules for shape interpretation. For example, predetermined contents such as feature amounts, e.g., the shape, area, uppermost point, and barycenter of an object such as the hand of a user in a dot matrix and designation contents corresponding to these predetermined contents are stored as interpretation rules. The designation contents include, e.g., selection of an icon, start of an application, and movement of a cursor. When cursor movement is to be performed, the moving amount of a cursor corresponding to the direction and the distance of the movement of the hand is also designated. For example, the following rules are possible. That is, a state in which only the index finger is raised is used to indicate cursor movement (in this case, the distance and direction of the movement of the tip of the index finger correspond to the distance and direction of the movement of the cursor). An action of moving the thumb while only the index finger is raised is used to indicate selection of an icon in a position where the cursor exists. An action of turning the palm of the hand while only the index finger is raised is used to indicate start of an application corresponding to an icon in a position where the cursor exists. [0207]
  • Representative examples of the extraction of a feature amount from a dot matrix in the shape interpretation by the shape interpreting unit 12 are distance information extraction and region extraction. If an object has a uniform homogeneous scattering surface, the reflected light image can be regarded as a distance image. Accordingly, the three-dimensional shape of the object can be extracted from the light-receiving unit. If the object is a hand, an inclination of the palm of the hand, for example, can be detected. The inclination of the palm of the hand appears as the difference between partial distances. If pixel values change when the hand is moved, it can be considered that the distance changes. Also, almost no light is reflected from a far object such as a background. Therefore, in processing of cutting out a region having a certain threshold value or more from a reflected light image, the shape of an object can be easily cut out. If the object is a hand, for example, it is very easy to cut out the silhouette image of the hand. Even when a distance image is used, a general approach is to once perform region extraction by using a threshold value and then use distance information in the region. [0208]
  • Various methods are usable as a method of matching a feature amount extracted from a dot matrix with the interpretation rules. Examples are vector formation by which a vector is extracted from an image, extraction of a shape deformed state based on a shape model, and spectral analysis based on a distance value on a scan line. [0209]
  • If there is no suitable shape, the matching processing can be reexecuted by changing the threshold value or the like. If no suitable shape is finally found, it is determined that there is no input. [0210]
  • If the interpretation result from the shape interpreting unit 12 indicates visual information presentation to a user, the presenting unit 14 performs presentation reflecting the interpretation result from the shape interpreting unit 12 on the display device. For example, the presenting unit 14 moves a cursor, changes the shape of the cursor, and, where necessary, presents messages. Note that the message presentation is performed by using a sound reproducing device singly or in combination with the display device. [0211]
  • The feedback function section will be described next. [0212]
  • The proper range evaluating unit 15 fetches the two-dimensional image stored in the image storage unit 11 as a dot matrix as shown in FIG. 30A (step S102), checks whether the object is in a proper range, and generates feedback information corresponding to the evaluation result (steps S106 through S116). [0213]
  • On the basis of the feedback information, the evaluation result reflecting unit 16 outputs an instruction for performing presentation reflecting the evaluation result by using the display device and/or the sound reproducing device (step S117). [0214]
  • First, details of the proper range evaluating unit 15 will be described below. [0215]
  • If an object is in a proper range, an appropriate dot matrix as shown in FIG. 30A, for example, is obtained, and a desired input operation using the hand of a user or the like is possible. However, if the object is outside the proper range, more specifically, if the object is too close to or too far from the light-receiving unit or protrudes to the left or the right from the light-receiving unit, no desired instruction or the like by the user can be input. [0216]
  • Accordingly, the proper range evaluating unit 15 analyzes a dot matrix and checks whether the object is in the proper range (step S106), is too close (step S108) or too far (step S110), or protrudes to the left (step S112) or to the right (step S114). If the object is in the proper range, the proper range evaluating unit 15 sets user feedback information=NULL (or a code indicating NULL; e.g., 0) (step S107). If the object is too close, the unit 15 sets user feedback information=close (or a code indicating close; e.g., 1) (step S109). If the object is too far, the unit 15 sets user feedback information=far (or a code indicating far; e.g., 2) (step S111). If the object protrudes to the left, the unit 15 sets user feedback information=left (or a code indicating left; e.g., 3) (step S113). If the object protrudes to the right, the unit 15 sets user feedback information=right (or a code indicating right; e.g., 4) (step S115). Otherwise, the unit 15 sets user feedback information=improper (step S116). [0217]
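For reference, these feedback values can be collected into a small enumeration. The numeric codes 0 through 4 follow the examples given above; the code for "improper" is not specified in the text and is chosen arbitrarily here.

```python
# Feedback codes as described above; the value for IMPROPER is an assumption.
from enum import IntEnum

class Feedback(IntEnum):
    NULL = 0      # object is in the proper range
    CLOSE = 1     # object is too close
    FAR = 2       # object is too far
    LEFT = 3      # object protrudes to the left
    RIGHT = 4     # object protrudes to the right
    IMPROPER = 5  # not given in the text; assumed here
```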
  • FIG. 31 shows an example of this evaluation procedure. FIGS. 30A, 34A, 35A, 36A, and 37A show dot matrix examples when an object is in the proper range, is too close, is too far, protrudes to the left, and protrudes to the right, respectively. [0218]
  • Various methods are possible for checking whether an object is in the proper range. In this embodiment, let s be the area of the image of the object, d be the distance to the closest point in the image of the object, and ℓ be the length of a vertical line in the image shape of the object. The area s can be represented by the number of pixels corresponding to the object in the dot matrix, or by the ratio of these pixels to all pixels of the dot matrix. The distance d to the closest point can be represented by the reciprocal of the maximum density value of the pixels corresponding to the object in the dot matrix, or by the difference obtained by subtracting that maximum density value from the highest possible pixel density. As shown in FIG. 32, the length ℓ of the vertical line in the image shape can be represented by the maximum number of vertically continuous pixels in the outer shape of the image of the object in the dot matrix. [0219]
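The three quantities could be computed from a dot matrix roughly as sketched below. This assumes a NumPy array in which pixel density grows as the object gets closer, an 8-bit intensity ceiling, and a simple column scan for the vertical line; none of these specifics come from the embodiment itself.

```python
# Hedged sketch of computing s, d, and the vertical-line length l from a dot
# matrix (brighter = closer). Threshold and intensity ceiling are assumptions.
import numpy as np

def area_s(dot_matrix: np.ndarray, threshold: int = 30) -> int:
    """Area s: number of pixels belonging to the object."""
    return int((dot_matrix >= threshold).sum())

def closest_distance_d(dot_matrix: np.ndarray, threshold: int = 30,
                       max_intensity: int = 255) -> int:
    """Distance d to the closest point: highest possible pixel density minus
    the maximum density among the object pixels (smaller means closer)."""
    obj = dot_matrix[dot_matrix >= threshold]
    return max_intensity - int(obj.max()) if obj.size else max_intensity

def vertical_line_length(dot_matrix: np.ndarray, threshold: int = 30) -> int:
    """Length l: maximum number of vertically continuous object pixels,
    taken over all columns of the dot matrix."""
    mask = dot_matrix >= threshold
    best = 0
    for column in mask.T:
        run = 0
        for value in column:
            run = run + 1 if value else 0
            best = max(best, run)
    return best
```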
  • A lower-limiting value γ and an upper-limiting value α of the area s, a lower-limiting value β and an upper-limiting value δ of the distance d to the closest point in the image, and an upper-limiting value ε of the length ℓ of the vertical line in the image shape are set. [0220]
  • If γ≦area s≦α, β≦distance d to closest point≦δ, and length ℓ of vertical line≦ε (step S121), the range is proper (step S122). [0221]
  • If area s>α and distance d to closest point<β (step S123), the object is too close (step S124). [0222]
  • If area s<γ and distance d to closest point>δ (step S125), the object is too far (step S126). [0223]
  • If length ℓ of vertical line>ε and the position of the vertical line is right (step S127), the object is protruding to the right (step S128). [0224]
  • If length ℓ of vertical line>ε and the position of the vertical line is left (step S129), the object is protruding to the left (step S130). [0225]
  • Otherwise, the range is improper (step S131); see the sketch following this list. [0226]
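A direct transcription of these checks (steps S121 through S131) might look as follows. The concrete limit values and the way the vertical line's left/right position is supplied are placeholders, not values from the embodiment.

```python
# Sketch of the proper range decision (steps S121-S131). The limits GAMMA,
# ALPHA, BETA, DELTA, EPSILON are placeholder values.
GAMMA, ALPHA = 50, 2000    # lower/upper limits on area s
BETA, DELTA = 10, 120      # lower/upper limits on distance d to the closest point
EPSILON = 40               # upper limit on the vertical-line length l

def evaluate_range(s: int, d: int, l: int, line_position: str) -> str:
    """Return the user feedback information for the given measurements.
    line_position is 'left' or 'right' depending on where the long vertical
    line lies in the dot matrix."""
    if GAMMA <= s <= ALPHA and BETA <= d <= DELTA and l <= EPSILON:
        return "NULL"        # proper range (step S122)
    if s > ALPHA and d < BETA:
        return "close"       # too close (step S124)
    if s < GAMMA and d > DELTA:
        return "far"         # too far (step S126)
    if l > EPSILON and line_position == "right":
        return "right"       # protruding to the right (step S128)
    if l > EPSILON and line_position == "left":
        return "left"        # protruding to the left (step S130)
    return "improper"        # step S131
```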
  • Details of the evaluation result reflecting unit 16 will be described next. [0227]
  • FIG. 33 shows an example of this process procedure. In this processing, the evaluation result reflecting unit 16 deforms the shape of the cursor displayed on the display screen on the basis of the user feedback information supplied as the evaluation result from the proper range evaluating unit 15, thereby informing the user of the evaluation result. A code sketch of this dispatch follows the list below. [0228]
  • If user feedback information=NULL (step S141), the evaluation result reflecting unit 16 does not change the shape of the cursor (step S142). FIG. 30B shows an example of this state. [0229]
  • If user feedback information=close (step S143), the unit 16 makes the cursor larger as shown in FIG. 34B (step S144). [0230]
  • If user feedback information=far (step S145), the unit 16 makes the cursor smaller and thinner as shown in FIG. 35B (step S146). [0231]
  • If user feedback information=left (step S147), the unit 16 deforms the left side of the cursor as shown in FIG. 36B (step S148). [0232]
  • If user feedback information=right (step S149), the unit 16 deforms the right side of the cursor as shown in FIG. 37B (step S150). [0233]
  • If user feedback information=improper, the unit 16 displays an error message (step S151). [0234]
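The dispatch of FIG. 33 could be sketched as below. The cursor object and its methods (scale, fade, deform, show_error) are hypothetical stand-ins for whatever display primitives the presenting side actually provides.

```python
# Illustrative dispatch for FIG. 33: the feedback information selects how the
# cursor's display state is changed. The cursor API here is hypothetical.
def reflect_on_cursor(feedback: str, cursor) -> None:
    if feedback == "NULL":
        return                              # leave the cursor unchanged (step S142)
    if feedback == "close":
        cursor.scale(1.5)                   # make the cursor larger (step S144)
    elif feedback == "far":
        cursor.scale(0.6)                   # make the cursor smaller ...
        cursor.fade()                       # ... and thinner (step S146)
    elif feedback == "left":
        cursor.deform("left")               # deform the left side (step S148)
    elif feedback == "right":
        cursor.deform("right")              # deform the right side (step S150)
    else:
        cursor.show_error("object is outside the proper range")  # step S151
```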
  • FIG. 38 shows another example of this process procedure. In this processing, the evaluation result reflecting unit 16 informs the user of the evaluation result by sound, using the sound reproducing device, on the basis of the user feedback information supplied as the evaluation result from the proper range evaluating unit 15. A corresponding sketch follows the list below. [0235]
  • If user feedback information=NULL (step S161), the evaluation result reflecting unit 16 presents nothing or presents a sound indicating the movement of the cursor (step S162). [0236]
  • If user feedback information=close (step S163), the unit 16 causes the sound reproducing device to output voice such as “too close” (step S164). [0237]
  • If user feedback information=far (step S165), the unit 16 causes the sound reproducing device to output voice such as “too far” (step S166). [0238]
  • If user feedback information=left (step S167), the unit 16 causes the sound reproducing device to output voice such as “protruding to the left” (step S168). [0239]
  • If user feedback information=right (step S169), the unit 16 causes the sound reproducing device to output voice such as “protruding to the right” (step S170). [0240]
  • If user feedback information=improper, the unit 16 presents a voice error message. [0241]
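The sound-based counterpart of FIG. 38 follows the same pattern. Here speak is a stand-in for the sound reproducing device, and the message for the "improper" case is an assumed wording, since the text only says a voice error message is presented.

```python
# Companion sketch for FIG. 38: the same feedback information drives voice
# messages through the sound reproducing device. speak() is a placeholder.
VOICE_MESSAGES = {
    "close": "too close",                            # step S164
    "far": "too far",                                # step S166
    "left": "protruding to the left",                # step S168
    "right": "protruding to the right",              # step S170
    "improper": "the object cannot be recognized",   # assumed error wording
}

def reflect_on_sound(feedback: str, speak) -> None:
    if feedback == "NULL":
        return    # present nothing, or a sound indicating cursor movement (step S162)
    speak(VOICE_MESSAGES.get(feedback, VOICE_MESSAGES["improper"]))
```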
  • Note that the evaluation result can also be presented by both images and sounds by using the processing in FIG. 33 together with the processing in FIG. 38. Alternatively, it is possible to prepare a function of informing by images and a function of informing by sounds, and to allow the user to turn these functions on and off separately. [0242]
  • In this embodiment, as described above, if an object such as the hand of a user deviates from the proper range, this information is presented to the user. Therefore, the user can readily recognize the proper range in a three-dimensional space and input a desired instruction or the like by performing gestures within the proper range. [0243]
  • In the procedure shown in FIG. 29, the processing of the input function section and the processing of the feedback function section are executed independently. However, the procedure can be modified so that the processing of the feedback function section is performed before the processing of the input function section, and the processing of the input function section is executed only when the object is determined to be in the proper range. [0244]
  • Also, each of the above functions can be realized by software. Furthermore, these functions can be practiced as machine-readable media recording programs for causing a computer to execute the procedures or the means described above. [0245]
  • The present invention is not limited to the above embodiments and can be practiced in the form of various modifications without departing from the technical range of the invention. [0246]
  • Additional advantages and modifications will readily occur to those skilled in the art. Therefore, the invention in its broader aspects is not limited to the specific details and representative embodiments shown and described herein. Accordingly, various modifications may be made without departing from the spirit and scope of the general inventive concept as defined by the appended claims and their equivalents. [0247]

Claims (30)

1. A user interface apparatus comprising:
means for cutting out an image to be processed from an input image and performing image processing; and
means for switching a mode for performing pointing and other modes on the basis of a result of the image processing of the input image.
2. A user interface apparatus comprising:
means for cutting out an image to be processed from an input image and performing image processing; and
means for switching at least a cursor move mode, a select mode, and a double click mode on the basis of a result of the image processing of the input image.
3. An apparatus according to claim 1, further comprising means for designating a recognition method of limiting image processing contents for each object selectable in the select mode,
wherein the image processing of the input image is performed for a selected object in accordance with a recognition method designated for the object.
4. An apparatus according to claim 2, further comprising means for designating a recognition method of limiting image processing contents for each object selectable in the select mode,
wherein the image processing of the input image is performed for a selected object in accordance with a recognition method designated for the object.
5. An apparatus according to claim 1, further comprising:
means for designating a recognition method of limiting image processing contents for each object selectable in the select mode; and
means for presenting, near a displayed object indicated by a cursor, information indicating a recognition method designated for the object.
6. An apparatus according to claim 2, further comprising:
means for designating a recognition method of limiting image processing contents for each object selectable in the select mode; and
means for presenting, near a displayed object indicated by a cursor, information indicating a recognition method designated for the object.
7. An apparatus according to claim 1, further comprising means for presenting the result of the image processing of the input image in a predetermined shape on a cursor.
8. An apparatus according to claim 2, further comprising means for presenting the result of the image processing of the input image in a predetermined shape on a cursor.
9. A user interface apparatus comprising:
a first device for inputting a reflected image; and
a second device for performing input by image processing of an input image,
wherein said second device comprises means for designating a recognition method of limiting contents of image processing of an input image with respect to said first device, and
said first device comprises means for performing predetermined image processing on the basis of the designated recognition method, and
means for sending back the input image and a result of the image processing to said second device.
10. An apparatus according to claim 9, wherein
said first device further comprises means for requesting said second device to transfer information necessary for image processing suited to a necessary recognition method, if said first device does not have image processing means suited to the recognition method, and
said second device further comprises means for transferring the requested information to said first device.
11. An apparatus according to claim 9, wherein each of said first and second devices further comprises means for requesting, when information necessary for image processing suited to a predetermined recognition method in the device is activated first, the other device to deactivate identical information, and
means for deactivating information necessary for image processing suited to a predetermined recognition method when requested to deactivate the information by the other device.
12. An instruction input method comprising the steps of:
performing image processing for an input image of an object; and
switching a mode for performing pointing and other modes on the basis of a result of the image processing.
13. An instruction input method using a user interface apparatus including a first device for inputting a reflected image and a second device for performing input by image processing of an input image, comprising the steps of:
allowing said second device to designate a recognition method of limiting contents of image processing of an input image with respect to said first device; and
allowing said first device to perform predetermined image processing on the basis of the designated recognition method and send back the input image and a result of the image processing to said second device.
14. A user interface apparatus for performing input by image processing, comprising:
means for checking whether an object of image processing is in a proper range within which the image processing is possible; and
means for presenting at least one of predetermined visual information and audio information, if it is determined that the object is outside the proper range.
15. A user interface apparatus for performing input by image processing, comprising:
means for checking whether an object of image processing is in a proper range within which the image processing is possible and, if the object is outside the proper range, checking a direction in which the object deviates from the proper range; and
means for informing a user of a direction in which the object deviates from the proper range by changing a display state of a cursor displayed on a display screen into a predetermined state, if it is determined that the object is outside the proper range.
16. An apparatus according to claim 15, wherein the cursor is made smaller and/or lighter in color if it is determined that the object is farther than the proper range.
17. An apparatus according to claim 15, wherein the cursor is made larger if it is determined that the object is closer than the proper range.
18. An apparatus according to claim 15, wherein a left side of the cursor is deformed if it is determined that the object falls outside the proper range to the left.
19. An apparatus according to claim 15, wherein a right side of the cursor is deformed if it is determined that the object falls outside the proper range to the right.
20. An apparatus according to claim 15, further comprising means for informing a user of a direction in which the object deviates from the proper range by using sound, if it is determined that the object is outside the proper range.
21. An apparatus according to claim 16, further comprising means for informing a user of a direction in which the object deviates from the proper range by using sound, if it is determined that the object is outside the proper range.
22. An apparatus according to claim 17, further comprising means for informing a user of a direction in which the object deviates from the proper range by using sound, if it is determined that the object is outside the proper range.
23. An apparatus according to claim 18, further comprising means for informing a user of a direction in which the object deviates from the proper range by using sound, if it is determined that the object is outside the proper range.
24. An apparatus according to claim 19, further comprising means for informing a user of a direction in which the object deviates from the proper range by using sound, if it is determined that the object is outside the proper range.
25. A user interface apparatus for performing input by image processing, comprising:
means for checking whether an object of image processing is in a proper range within which the image processing is possible and, if the object is outside the proper range, checking a direction in which the object deviates from the proper range; and
means for informing a user of a direction in which the object deviates from the proper range by sound, if it is determined that the object is outside the proper range.
26. An object operation range presenting method in a user interface apparatus for performing input by image processing of an object, comprising the steps of:
checking whether an object of image processing is in a proper range within which the image processing is possible; and
presenting at least one of predetermined visual information and audio information when the object is outside the proper range.
27. An object operation range presenting method in a user interface apparatus for performing input by image processing of an object, comprising the steps of:
checking whether an object of image processing is in a proper range within which the image processing is possible and, if the object is outside the proper range, checking a direction in which the object deviates from the proper range; and
informing a user of a direction in which the object deviates from the proper range by changing a display state of a cursor displayed on a display screen into a predetermined state, if it is determined that the object is outside the proper range.
28. An object operation range presenting method in a user interface apparatus for performing input by image processing of an object, comprising the steps of:
checking whether an object of image processing is in a proper range within which the image processing is possible and, if the object is outside the proper range, checking a direction in which the object deviates from the proper range; and
informing a user of a direction in which the object deviates from the proper range by sound, if it is determined that the object is outside the proper range.
29. An article of manufacture comprising:
a computer usable medium having computer readable program code means embodied therein for causing an instruction to be input, the computer readable program code means in said article of manufacture comprising:
computer readable program code means for causing a computer to perform image processing for an input image of an object; and
computer readable program code means for causing a computer to switch a mode for performing pointing and other modes on the basis of a result of the image processing.
30. An article of manufacture comprising:
a computer usable medium having computer readable program code means embodied therein for causing an instruction to be input using a user interface apparatus including a first device for inputting a reflected image and a second device for performing input by image processing of an input image, the computer readable program means in said article of manufacture comprising:
computer readable program code means for causing a computer to allow said second device to designate a recognition method of limiting contents of image processing of an input image with respect to said first device; and
computer readable program code means for causing a computer to allow said first device to perform predetermined image processing on the basis of the designated recognition method and send back the input image and a result of the image processing to said second device.
US09/860,496 1997-01-22 2001-05-21 User interface apparatus and operation range presenting method Abandoned US20010024213A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US09/860,496 US20010024213A1 (en) 1997-01-22 2001-05-21 User interface apparatus and operation range presenting method

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
JP00977397A JP3819096B2 (en) 1997-01-22 1997-01-22 User interface device and operation range presentation method
JP9-009496 1997-01-22
JP00949697A JP3588527B2 (en) 1997-01-22 1997-01-22 User interface device and instruction input method
US09/009,696 US6266061B1 (en) 1997-01-22 1998-01-20 User interface apparatus and operation range presenting method
US09/860,496 US20010024213A1 (en) 1997-01-22 2001-05-21 User interface apparatus and operation range presenting method

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US09/009,696 Division US6266061B1 (en) 1997-01-22 1998-01-20 User interface apparatus and operation range presenting method

Publications (1)

Publication Number Publication Date
US20010024213A1 true US20010024213A1 (en) 2001-09-27

Family

ID=26344242

Family Applications (2)

Application Number Title Priority Date Filing Date
US09/009,696 Expired - Lifetime US6266061B1 (en) 1997-01-22 1998-01-20 User interface apparatus and operation range presenting method
US09/860,496 Abandoned US20010024213A1 (en) 1997-01-22 2001-05-21 User interface apparatus and operation range presenting method

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US09/009,696 Expired - Lifetime US6266061B1 (en) 1997-01-22 1998-01-20 User interface apparatus and operation range presenting method

Country Status (1)

Country Link
US (2) US6266061B1 (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050088409A1 (en) * 2002-02-28 2005-04-28 Cees Van Berkel Method of providing a display for a gui
US20070130547A1 (en) * 2005-12-01 2007-06-07 Navisense, Llc Method and system for touchless user interface control
US20070220437A1 (en) * 2006-03-15 2007-09-20 Navisense, Llc. Visual toolkit for a virtual user interface
US20070294639A1 (en) * 2004-11-16 2007-12-20 Koninklijke Philips Electronics, N.V. Touchless Manipulation of Images for Regional Enhancement
US20100269072A1 (en) * 2008-09-29 2010-10-21 Kotaro Sakata User interface device, user interface method, and recording medium
WO2011035723A1 (en) * 2009-09-23 2011-03-31 Han Dingnan Method and interface for man-machine interaction
US20110181703A1 (en) * 2010-01-27 2011-07-28 Namco Bandai Games Inc. Information storage medium, game system, and display image generation method
WO2013093040A1 (en) * 2011-12-23 2013-06-27 Sensomotoric Instruments Gmbh Method and system for presenting at least one image of at least one application on a display device
US8520901B2 (en) 2010-06-11 2013-08-27 Namco Bandai Games Inc. Image generation system, image generation method, and information storage medium
US8854304B2 (en) 2010-06-11 2014-10-07 Namco Bandai Games Inc. Image generation system, image generation method, and information storage medium
US10990189B2 (en) * 2007-09-14 2021-04-27 Facebook, Inc. Processing of gesture-based user interaction using volumetric zones

Families Citing this family (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6525716B1 (en) * 1997-04-01 2003-02-25 Casio Computer Co., Ltd. Handwritten data input device having coordinate detection tablet
JP2000003445A (en) * 1998-06-15 2000-01-07 Toshiba Corp Method and device for extracting information and recording medium
US6990639B2 (en) * 2002-02-07 2006-01-24 Microsoft Corporation System and process for controlling electronic components in a ubiquitous computing environment using multimodal integration
US20070136064A1 (en) * 2003-04-16 2007-06-14 Carroll David W Mobile personal computer with movement sensor
US20050104850A1 (en) * 2003-11-17 2005-05-19 Chia-Chang Hu Cursor simulator and simulating method thereof for using a limb image to control a cursor
US20050104851A1 (en) * 2003-11-17 2005-05-19 Chia-Chang Hu Cursor simulator and a simulation method thereof for using a laser beam to control a cursor
US7379562B2 (en) * 2004-03-31 2008-05-27 Microsoft Corporation Determining connectedness and offset of 3D objects relative to an interactive surface
US20050227217A1 (en) * 2004-03-31 2005-10-13 Wilson Andrew D Template matching on interactive surface
US7394459B2 (en) * 2004-04-29 2008-07-01 Microsoft Corporation Interaction between objects and a virtual environment display
JP2005339444A (en) * 2004-05-31 2005-12-08 Toshiba Matsushita Display Technology Co Ltd Display device
US7787706B2 (en) 2004-06-14 2010-08-31 Microsoft Corporation Method for controlling an intensity of an infrared source used to detect objects adjacent to an interactive display surface
US7593593B2 (en) * 2004-06-16 2009-09-22 Microsoft Corporation Method and system for reducing effects of undesired signals in an infrared imaging system
US7519223B2 (en) 2004-06-28 2009-04-14 Microsoft Corporation Recognizing gestures and using gestures for interacting with software applications
US7743348B2 (en) * 2004-06-30 2010-06-22 Microsoft Corporation Using physical objects to adjust attributes of an interactive display application
US7692627B2 (en) * 2004-08-10 2010-04-06 Microsoft Corporation Systems and methods using computer vision and capacitive sensing for cursor control
US7576725B2 (en) * 2004-10-19 2009-08-18 Microsoft Corporation Using clear-coded, see-through objects to manipulate virtual objects
US7925996B2 (en) * 2004-11-18 2011-04-12 Microsoft Corporation Method and system for providing multiple input connecting user interface
US7499027B2 (en) * 2005-04-29 2009-03-03 Microsoft Corporation Using a light pointer for input on an interactive display surface
US7525538B2 (en) * 2005-06-28 2009-04-28 Microsoft Corporation Using same optics to image, illuminate, and project
US7911444B2 (en) 2005-08-31 2011-03-22 Microsoft Corporation Input method for surface of interactive display
US8060840B2 (en) 2005-12-29 2011-11-15 Microsoft Corporation Orientation free user interface
US8196055B2 (en) 2006-01-30 2012-06-05 Microsoft Corporation Controlling application windows in an operating system
US7515143B2 (en) * 2006-02-28 2009-04-07 Microsoft Corporation Uniform illumination of interactive display panel
KR101299682B1 (en) * 2006-10-16 2013-08-22 삼성전자주식회사 Universal input device
US8212857B2 (en) * 2007-01-26 2012-07-03 Microsoft Corporation Alternating light sources to reduce specular reflection
EP2428870A1 (en) * 2010-09-13 2012-03-14 Samsung Electronics Co., Ltd. Device and method for controlling gesture for mobile device
US8890803B2 (en) 2010-09-13 2014-11-18 Samsung Electronics Co., Ltd. Gesture control system
US10620775B2 (en) * 2013-05-17 2020-04-14 Ultrahaptics IP Two Limited Dynamic interactive objects
US9436288B2 (en) 2013-05-17 2016-09-06 Leap Motion, Inc. Cursor mode switching
CN106462178A (en) 2013-09-11 2017-02-22 谷歌技术控股有限责任公司 Electronic device and method for detecting presence and motion
DE102015006613A1 (en) * 2015-05-21 2016-11-24 Audi Ag Operating system and method for operating an operating system for a motor vehicle

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5355180A (en) * 1991-12-19 1994-10-11 Goldstar Co., Ltd. Apparatus for and method of a television receiver incorporating audio/visual warning of viewing distance and audio message record/playback features
US5528332A (en) * 1992-12-28 1996-06-18 Canon Kabushiki Kaisha Camera having in-focus state indicating device
US5699441A (en) * 1992-03-10 1997-12-16 Hitachi, Ltd. Continuous sign-language recognition apparatus and input apparatus
US5870636A (en) * 1996-09-09 1999-02-09 Olympus Optical Co., Ltd. Rangefinding device for camera
US5900863A (en) * 1995-03-16 1999-05-04 Kabushiki Kaisha Toshiba Method and apparatus for controlling computer without touching input device
US5907353A (en) * 1995-03-28 1999-05-25 Canon Kabushiki Kaisha Determining a dividing number of areas into which an object image is to be divided based on information associated with the object
US5917553A (en) * 1996-10-22 1999-06-29 Fox Sports Productions Inc. Method and apparatus for enhancing the broadcast of a live event
US6144366A (en) * 1996-10-18 2000-11-07 Kabushiki Kaisha Toshiba Method and apparatus for generating information input using reflected light image of target object
US20040066970A1 (en) * 1995-11-01 2004-04-08 Masakazu Matsugu Object extraction method, and image sensing apparatus using the method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3244798B2 (en) * 1992-09-08 2002-01-07 株式会社東芝 Moving image processing device
JPH0863326A (en) * 1994-08-22 1996-03-08 Hitachi Ltd Image processing device/method
US5594469A (en) * 1995-02-21 1997-01-14 Mitsubishi Electric Information Technology Center America Inc. Hand gesture machine control system

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5355180A (en) * 1991-12-19 1994-10-11 Goldstar Co., Ltd. Apparatus for and method of a television receiver incorporating audio/visual warning of viewing distance and audio message record/playback features
US5699441A (en) * 1992-03-10 1997-12-16 Hitachi, Ltd. Continuous sign-language recognition apparatus and input apparatus
US5528332A (en) * 1992-12-28 1996-06-18 Canon Kabushiki Kaisha Camera having in-focus state indicating device
US5900863A (en) * 1995-03-16 1999-05-04 Kabushiki Kaisha Toshiba Method and apparatus for controlling computer without touching input device
US5907353A (en) * 1995-03-28 1999-05-25 Canon Kabushiki Kaisha Determining a dividing number of areas into which an object image is to be divided based on information associated with the object
US20040066970A1 (en) * 1995-11-01 2004-04-08 Masakazu Matsugu Object extraction method, and image sensing apparatus using the method
US5870636A (en) * 1996-09-09 1999-02-09 Olympus Optical Co., Ltd. Rangefinding device for camera
US6144366A (en) * 1996-10-18 2000-11-07 Kabushiki Kaisha Toshiba Method and apparatus for generating information input using reflected light image of target object
US5917553A (en) * 1996-10-22 1999-06-29 Fox Sports Productions Inc. Method and apparatus for enhancing the broadcast of a live event

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050088409A1 (en) * 2002-02-28 2005-04-28 Cees Van Berkel Method of providing a display for a gui
US8473869B2 (en) * 2004-11-16 2013-06-25 Koninklijke Philips Electronics N.V. Touchless manipulation of images for regional enhancement
US20070294639A1 (en) * 2004-11-16 2007-12-20 Koninklijke Philips Electronics, N.V. Touchless Manipulation of Images for Regional Enhancement
US20070130547A1 (en) * 2005-12-01 2007-06-07 Navisense, Llc Method and system for touchless user interface control
US20070220437A1 (en) * 2006-03-15 2007-09-20 Navisense, Llc. Visual toolkit for a virtual user interface
US8578282B2 (en) * 2006-03-15 2013-11-05 Navisense Visual toolkit for a virtual user interface
US10990189B2 (en) * 2007-09-14 2021-04-27 Facebook, Inc. Processing of gesture-based user interaction using volumetric zones
US20100269072A1 (en) * 2008-09-29 2010-10-21 Kotaro Sakata User interface device, user interface method, and recording medium
US8464160B2 (en) 2008-09-29 2013-06-11 Panasonic Corporation User interface device, user interface method, and recording medium
WO2011035723A1 (en) * 2009-09-23 2011-03-31 Han Dingnan Method and interface for man-machine interaction
US20110181703A1 (en) * 2010-01-27 2011-07-28 Namco Bandai Games Inc. Information storage medium, game system, and display image generation method
US8520901B2 (en) 2010-06-11 2013-08-27 Namco Bandai Games Inc. Image generation system, image generation method, and information storage medium
US8854304B2 (en) 2010-06-11 2014-10-07 Namco Bandai Games Inc. Image generation system, image generation method, and information storage medium
WO2013093040A1 (en) * 2011-12-23 2013-06-27 Sensomotoric Instruments Gmbh Method and system for presenting at least one image of at least one application on a display device
GB2512518A (en) * 2011-12-23 2014-10-01 Sensomotoric Instr Ges Für Innovative Sensorik Mbh Method and system for presenting at least one image of at least one application on a display device
US9395812B2 (en) 2011-12-23 2016-07-19 Sensomotoric Instruments Gesellschaft Fur Innovative Sensorik Mbh Method and system for presenting at least one image of at least one application on a display device
GB2512518B (en) * 2011-12-23 2017-06-14 Sensomotoric Instr Ges Fur Innovative Sensorik Mbh Method and system for presenting at least one image of at least one application on a display device
DE112012005414B4 (en) 2011-12-23 2022-04-28 Apple Inc. Method and system for displaying at least one image of at least one application on a display device

Also Published As

Publication number Publication date
US6266061B1 (en) 2001-07-24

Similar Documents

Publication Publication Date Title
US6266061B1 (en) User interface apparatus and operation range presenting method
US7770120B2 (en) Accessing remote screen content
EP0637795B1 (en) Gestural indicators for selecting graphic objects
US6043805A (en) Controlling method for inputting messages to a computer
EP0856786B1 (en) Window Displaying apparatus and method
US8799821B1 (en) Method and apparatus for user inputs for three-dimensional animation
JP5552772B2 (en) Information processing apparatus, information processing method, and computer program
JP4688739B2 (en) Information display device
US20060288314A1 (en) Facilitating cursor interaction with display objects
US6965368B1 (en) Game control device having genre data
US6285374B1 (en) Blunt input device cursor
US20080229254A1 (en) Method and system for enhanced cursor control
US20090153468A1 (en) Virtual Interface System
US10180714B1 (en) Two-handed multi-stroke marking menus for multi-touch devices
JPH10207619A (en) User interface device and operation range indicating method
KR20120085783A (en) Method and interface for man-machine interaction
AU742998B2 (en) Dynamic object linking interface
JP2004303000A (en) Three-dimensional instruction input device
JP4786292B2 (en) Information processing apparatus, hierarchical information output method, and program
JP2004246814A (en) Indication movement recognition device
JP3588527B2 (en) User interface device and instruction input method
US6806878B2 (en) Graphic editing apparatus for adding or deleting curve to/from graphics by interactive processing
JP4563723B2 (en) Instruction motion recognition device and instruction motion recognition program
KR101559424B1 (en) A virtual keyboard based on hand recognition and implementing method thereof
US11301125B2 (en) Vector object interaction

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION