CN102473041A - Image recognition device, operation determination method, and program - Google Patents

Image recognition device, operation determination method, and program

Info

Publication number
CN102473041A
CN102473041A (application numbers CN2010800356938A / CN201080035693A)
Authority
CN
China
Prior art keywords
operator
image
mentioned
face
pseudo operation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2010800356938A
Other languages
Chinese (zh)
Other versions
CN102473041B (en
Inventor
泉贤二
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
DAO GENXIAN
Shimane Prefecture
Original Assignee
DAO GENXIAN
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by DAO GENXIAN filed Critical DAO GENXIAN
Publication of CN102473041A publication Critical patent/CN102473041A/en
Application granted granted Critical
Publication of CN102473041B publication Critical patent/CN102473041B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/0304Detection arrangements using opto-electronic means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/0304Detection arrangements using opto-electronic means
    • G06F3/0317Detection arrangements using opto-electronic means in co-operation with a patterned surface, e.g. absolute position or relative movement detection for an optical mouse or pen positioned with respect to a coded surface
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0346Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/042Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion

Abstract

The present invention provides an image recognition device, an operation determination method, and a program that allow operations to be determined accurately. An image-reading unit (301) reads (S401) image data captured by a video camera (201). An image-extraction unit (302) then extracts (S402) an image of an operator from that data. After this preparation, a virtual operation surface and an operation region are formed (S403) on the basis of the extracted image of the operator (102). If the operator is an adult (810), the operation region (811) can be formed taking into consideration the operator's height (line-of-sight position) and arm length; if the operator is a child (820), the height and arm length are shorter, and the operation region (821) can be set to match.
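The abstract's idea of sizing the operation region from the operator's body dimensions can be sketched as below. The proportions used here (eye-line ratio, reach fractions) are illustrative assumptions, not values from the patent; the point is only that a shorter operator yields a proportionally smaller region.

```python
from dataclasses import dataclass


@dataclass
class OperationRegion:
    """Axis-aligned box in front of the operator (camera coordinates, metres)."""
    width: float
    height: float
    depth: float


def form_operation_region(operator_height_m: float, arm_length_m: float) -> OperationRegion:
    """Size an operation region from the operator's height and arm length.

    The region roughly spans the reachable volume below the eye-line, so a
    child automatically gets a smaller region than an adult.
    """
    eye_line = 0.93 * operator_height_m          # approximate line-of-sight height
    return OperationRegion(
        width=2.0 * arm_length_m,                # reachable span left/right
        height=eye_line * 0.6,                   # band below the eye-line
        depth=arm_length_m,                      # reach toward the display
    )


adult = form_operation_region(1.75, 0.70)   # operator 810 in the abstract
child = form_operation_region(1.20, 0.45)   # operator 820 in the abstract
assert adult.width > child.width and adult.depth > child.depth
```

With these placeholder ratios, the adult's region is 1.4 m wide and 0.7 m deep, the child's 0.9 m by 0.45 m, matching the abstract's qualitative claim.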

Description

Image recognition device, operation determination method, and program
Technical field
The present invention relates to an image recognition device and an operation determination method. More specifically, it relates to an image recognition device and an operation determination method that determine the action of a measurement target from an image obtained by capturing with a video camera or the like.
Background art
In recent years, various devices and methods have been proposed as man-machine interfaces between computers or electronic equipment and people. In particular, technologies have been proposed for game machines, operation guide equipment, and the like in which a camera photographs the whole or part of an operator's body and the operator's intended action is determined from the resulting image. For example, Patent Document 1 proposes a technology comprising a host computer that recognizes the shape and motion of an object in an image captured by a CCD camera, and a display that shows the shape and motion recognized by the host computer. When the user faces the CCD camera and gives instructions by gesture or the like, the given gesture is shown on the display screen, and a virtual switch or the like displayed on the screen can be selected by gesture using an arrow-cursor icon, so that equipment can be operated very simply without an input device such as a mouse.
More recently, input systems have also been proposed in which the motion or shape of a finger is recognized from a captured image as a gesture and used for operation input. For example, in screen operation for presentations by gesture, or at a non-contact public information (kiosk) terminal that needs no touch panel, when an operator facing a large screen performs various operations toward a camera usually placed at the bottom of the screen, the content of those operations is reflected on the large screen. The operator's shape and motion are extracted from such captured images by methods well known in the art and compared, for example, with patterns determined in advance and stored in a database, whereby the meaning of the operator's shape or motion is determined and used to control the equipment.
On the other hand, as shown in Fig. 13, as a technology for reading the operator's image, a three-dimensional or stereo-capable camera can be used to photograph the operator and reproduce a stereoscopic image; such cameras are also used for purposes such as security inspection. By reproducing a stereoscopic image, the operator's motion can be grasped three-dimensionally: for example, the motion of the operator, particularly of the hands, can be recognized in the front-back direction as well, so compared with the case of using two-dimensional images, the variety of usable gestures increases. Moreover, even if a plurality of people are extracted as images, because the stereoscopic image separates them in depth even in a crowd, it is possible to extract only the motion of the foremost operator and use it for operation input.
Patent Document 1: Japanese Unexamined Patent Application Publication No. 2004-078977
Summary of the invention
Problems to be solved by the invention
However, conventional gesture operation has not established any standard gestures of the kind sanctioned by custom as de facto standards; apart from pointing operations on XY coordinates with the index finger, users cannot intuitively tell which action corresponds to which operation. Some systems indicate a "click", "double-click", "drag", or the like by holding a spatial coordinate fixed during a waiting period of several seconds, but quite often the long waiting time that is set hinders comfortable operation. Thus there is the problem that no practical method exists for performing operations such as clicking and deciding (double-click and so on) in a way that is easy to understand and comfortable.
In addition, unlike an input device such as a touch panel that the operator can touch directly, conventional gesture detection apparatuses have difficulty capturing the operator's clear intent. That is, even when the operator performs some action, it is not easy to determine whether the action is an intentional input or merely a habitual motion. As a result, there is the problem that, for example, a simple gesture cannot be recognized unless it is performed in an unnatural, conspicuous way, that advance rules about gestures are required, and that complicated gestures cannot be used.
The present invention was made in view of these problems, and its object is to provide an image recognition device and an operation determination method that allow an operation to be determined accurately, on the basis of having the device first recognize that the operator's movement is in a state related to some input operation.
Means for solving the problems
To achieve this object, the present invention relates to an image recognition device characterized by comprising: a three-dimensional imaging unit that acquires an image of an operator and generates stereoscopic image data; an operation surface forming unit that forms a virtual operation surface based on the image of the operator acquired by the three-dimensional imaging unit; an operation determination unit that uses the three-dimensional imaging unit to read the motion of an image of at least a part of the operator relative to the formed virtual operation surface, and determines whether or not the motion is an operation based on the positional relation between the part of the operator and the virtual operation surface; and a signal output unit that outputs a prescribed signal when the motion is determined to be an operation.
The invention is characterized in that the operation determination unit determines that the motion is an operation when the part of the operator is positioned closer to the three-dimensional imaging unit than the virtual operation surface.
The invention is characterized in that the operation determination unit determines which operation is being performed from the shape or motion of the portion of the operator that is positioned closer to the three-dimensional imaging unit than the virtual operation surface.
The invention is characterized in that the operation determination unit searches a storage unit that holds, in advance, operation contents in correspondence with shapes or motions of a part of the operator, and determines the operation corresponding to the matching shape or motion to be the operation that was input.
The invention is characterized by further comprising an image display unit arranged to face the operator, wherein the operation determination unit displays the operation determination result at the present time on the image display unit so that the operator can recognize it.
The invention is characterized by further comprising an image display unit arranged to face the operator, wherein, when the operator's motion is read within the region of a virtual operation layer, an indication allocated in advance to that virtual operation layer is displayed on the image display unit.
The invention is characterized by further comprising an image display unit that the operator can observe visually, wherein the image display unit calculates the distance corresponding to the positional relation between the virtual operation surface formed by the operation surface forming unit and the part of the operator on the side opposite the three-dimensional imaging unit, and displays an indication that changes according to this distance, thereby showing the operation about to be determined.
The invention is characterized in that, when the part of the operator is positioned closer to the three-dimensional imaging unit than the virtual operation surface, the image display unit stops the change of the indication and shows the determined operation.
The invention is characterized by further comprising an operation content determination unit, wherein, when the operator's motion is read within the region of any one of two or more virtual operation layers determined according to the positional relation with the virtual operation surface, the operation content determination unit determines the content of the operation based on the operation type allocated in advance to that virtual operation layer and on the operator's motion within that virtual operation layer.
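The layered determination just described can be sketched as follows. The layer boundaries and operation names are illustrative assumptions; the text only specifies that two or more layers, each with a pre-allocated operation type, are distinguished by distance from the virtual operation surface.

```python
from typing import List, Optional


def classify_layer(finger_z_m: float, surface_z_m: float,
                   layer_depth_m: float, layer_ops: List[str]) -> Optional[str]:
    """Map the fingertip's depth beyond the virtual operation surface to one
    of several stacked operation layers, each pre-assigned an operation type.

    Depths are distances from the 3D camera along its optical axis, so a
    smaller z means closer to the camera (i.e. pushed through the surface).
    """
    penetration = surface_z_m - finger_z_m       # > 0 once the hand crosses the surface
    if penetration <= 0:
        return None                              # still on the operator's side: no operation
    index = min(int(penetration // layer_depth_m), len(layer_ops) - 1)
    return layer_ops[index]


ops = ["point", "select", "drag"]                # hypothetical per-layer operation types
assert classify_layer(1.05, 1.00, 0.10, ops) is None       # in front of the surface
assert classify_layer(0.95, 1.00, 0.10, ops) == "point"    # first layer
assert classify_layer(0.79, 1.00, 0.10, ops) == "drag"     # deepest layer
```

The deepest layer is clamped so that reaching past all defined layers still selects the last operation type rather than failing.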
The invention is characterized in that the operation surface forming unit forms the virtual operation surface at a position corresponding to positional information on the operator's upper body.
The invention is characterized in that the operation surface forming unit adjusts the position and angle of the virtual operation surface based on the position of the image display unit.
The present invention also relates to an operation determination method for recognizing an operator's image and determining operation content by means of an image recognition device, the operation determination method characterized by comprising the following steps: a three-dimensional imaging step of reading the operator's image and generating stereoscopic image data; an operation surface forming step of forming a virtual operation surface based on the operator's image read by the three-dimensional imaging unit; an operation determination step of using the three-dimensional imaging unit to read the motion of an image of at least a part of the operator relative to the formed virtual operation surface, and determining whether or not the motion is an operation based on the positional relation between the part of the operator and the virtual operation surface; and a signal output step of outputting a prescribed signal when the motion is determined to be an operation.
The present invention also relates to a program that causes an image recognition device to execute an operation determination method for recognizing an operator's image and determining operation content, the operation determination method characterized by comprising the following steps: a three-dimensional imaging step of reading the operator's image and generating stereoscopic image data; an operation surface forming step of forming a virtual operation surface based on the operator's image read by the three-dimensional imaging unit; an operation determination step of using the three-dimensional imaging unit to read the motion of an image of at least a part of the operator relative to the formed virtual operation surface, and determining whether or not the motion is an operation based on the positional relation between the part of the operator and the virtual operation surface; and a signal output step of outputting a prescribed signal when the motion is determined to be an operation.
Effects of the invention
The present invention comprises: a three-dimensional imaging unit that acquires an image of the operator and generates stereoscopic image data; an operation surface forming unit that forms a virtual operation surface based on the operator's image acquired by the three-dimensional imaging unit; an operation determination unit that uses the three-dimensional imaging unit to read the motion of at least part of the operator's image relative to the formed virtual operation surface and determines whether the motion is an operation based on the positional relation between the part of the operator and the virtual operation surface; and a signal output unit that outputs a prescribed signal when the motion is determined to be an operation. As a result, the operator does not need to memorize special gestures or be familiar with the device in advance; simply by moving the whole or part of the body, the motion can be accurately determined as an operation expressing the operator's intention.
Description of drawings
Fig. 1 is a diagram showing an example of the operation input system of this embodiment.
Fig. 2 is a block diagram schematically showing the relation between the operation input system of this embodiment and a computer.
Fig. 3 is a block diagram showing an example of the functional modules of the program processed in the CPU of the computer of this embodiment.
Fig. 4 is a flowchart of the processing of this embodiment.
Fig. 5 is a diagram showing a virtual operation surface formed from an operation surface formation reference according to one embodiment of the present invention.
Fig. 6 is a diagram showing a virtual operation surface formed from an operation surface formation reference according to one embodiment of the present invention.
Fig. 7 is a diagram showing an example of an image captured when the images of a plurality of operators are taken in with a conventional 3D camera.
Fig. 8 is a diagram showing an example of setting an operation region for assisting operation input according to one embodiment of the present invention.
Fig. 9 is a diagram showing an example of adjusting the operation region according to the position of the screen or camera according to one embodiment of the present invention.
Fig. 10 is a diagram showing another example of adjusting the operation region according to the position of the screen or camera according to one embodiment of the present invention.
Fig. 11 is a diagram showing another example of adjusting the operation region according to the position of the screen or camera according to one embodiment of the present invention.
Fig. 12 is a diagram for explaining the method of adjusting the operation region according to the position of the screen or camera according to one embodiment of the present invention.
Fig. 13 is a diagram showing a conventional method of taking in the operator's image using a 3D camera.
Fig. 14 is a diagram showing an example of an operation input system using a marker-based virtual operation surface according to one embodiment of the present invention.
Fig. 15 is a diagram showing an example of concrete operation by the operation input method according to one embodiment of the present invention.
Fig. 16 is a diagram showing an example of adjusting the operation region according to the position of the screen or camera according to one embodiment of the present invention.
Fig. 17 is a diagram showing an example of a concrete display for assisting operation input according to one embodiment of the present invention.
Fig. 18 is a diagram showing the virtual operation surface and the operation region according to one embodiment of the present invention.
Fig. 19 is a diagram showing the relation between the operator's motion and the icon displayed on the screen according to one embodiment of the present invention.
Fig. 20 is a diagram showing an example of a concrete display of the operation input screen according to one embodiment of the present invention.
Fig. 21 is a diagram showing examples of the various icons that can be used in the operation input screen according to one embodiment of the present invention.
Fig. 22 is a diagram showing the relation between the operator's motion and the icon displayed on the screen according to one embodiment of the present invention.
Fig. 23 is a diagram showing how the color of a menu button of the operation input screen changes according to one embodiment of the present invention.
Fig. 24 is a diagram showing how the shading of a menu button of the operation input screen changes according to one embodiment of the present invention.
Fig. 25 is a diagram showing a display screen of an example of an instruction, input through this embodiment, to move a figure displayed on the screen.
Fig. 26 is a diagram showing the relation between the operator's motion and the menu displayed on the screen according to one embodiment of the present invention.
Fig. 27 is a diagram showing the relation between the operator's motion and the menu displayed on the screen according to another embodiment of the present invention.
Fig. 28 is a diagram showing the relation between the operator's motion and the menu displayed on the screen according to yet another embodiment of the present invention.
Fig. 29 is a diagram showing the virtual operation surface and the operation surface formation reference according to one embodiment of the present invention.
Fig. 30 is a diagram showing an example of adjusting the operation region according to the position of the projector screen or camera according to one embodiment of the present invention.
Fig. 31 is a diagram showing the relation between the operator's motion and the menu displayed on the screen according to one embodiment of the present invention.
Embodiment
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
(first embodiment)
Fig. 1 is a diagram showing an example of the operation input system of this embodiment. The display 111 of this embodiment is placed in front of the operator 102, and the operator 102 can operate the operation input system while being aware that a virtual operation surface exists at a fixed position between himself and the display 111, and that the shape of a finger or the like there becomes the object of operation determination. The display 111 shows the various images of the various application programs that this system targets; in addition, as described later, it can assist operation input by, for example, displaying in a corner of the screen the part of the operator 102 that is currently the object, letting the operator 102 recognize which motion can be determined as an operation at the present time. The motion of the operator 102 is captured by the video camera 201, and the captured image is processed by the computer 110. Based on the position, height, arm length, and so on of the operator 102, or on body dimension information such as height and shoulder width, the position and size of the optimal virtual operation surface, and of the operation region including it, are set, and it is determined which operation is meant by the gesture of the part extended from the virtual operation surface toward the display 111. That is, the computer 110 creates a stereoscopic image of the operator 102 from the data obtained from the video camera 201, calculates the position of the virtual operation surface, adjusts the position and size of the virtual operation surface according to the later-described positions and arrangement of the video camera 201 and the display 111, determines, with the virtual operation surface as reference, whether a finger or the like of the operator 102 is extended toward the video camera 201, and treats that part as the object of operation to determine the operation content.
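The core geometric test of the paragraph above can be sketched in a few lines. The reach fraction used to place the surface is an assumed comfort margin, not a value given in the text; depths are distances from the 3D camera along its optical axis, in metres.

```python
def virtual_surface_z(operator_torso_z_m: float, arm_length_m: float,
                      reach_fraction: float = 0.8) -> float:
    """Place the virtual operation surface a comfortable fraction of the
    arm's reach in front of the operator's torso (illustrative heuristic)."""
    return operator_torso_z_m - reach_fraction * arm_length_m


def judge_operation(fingertip_z_m: float, surface_z_m: float) -> bool:
    """An action counts as an operation when the fingertip lies closer to
    the camera than the virtual operation surface, i.e. has been pushed
    through it toward the display."""
    return fingertip_z_m < surface_z_m


surface = virtual_surface_z(operator_torso_z_m=2.0, arm_length_m=0.7)
assert abs(surface - 1.44) < 1e-9
assert judge_operation(1.30, surface)       # arm extended past the surface
assert not judge_operation(1.60, surface)   # hand still on the operator's side
```

Keeping the comparison in camera-axis coordinates avoids any coordinate transform: extraction only has to report the fingertip's depth per frame.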
In Fig. 1, the video camera 201 is mounted at the top of the display 111 to acquire images; but as long as the required images can be obtained as in Figs. 8 to 12, the arrangement is not limited to this. Any imaging means known in the art, such as an infrared camera, can also be used, and the installation position can be chosen anywhere near the display. Here, in this embodiment, a three-dimensional (or 3D) camera is used as the video camera 201, so that a stereoscopic image including the operator can be created.
Furthermore, a sound output device such as a loudspeaker (not shown) is installed in the system of this embodiment, so that display contents and information related to operation can also be conveyed to the operator by sound. By providing this function, the operation contents are not only shown as images on the display; instructions and results are simultaneously conveyed by sound so that the virtual operation surface can be identified, and therefore even a visually impaired operator can operate the system.
Figs. 5 and 6 are diagrams for explaining the role of the virtual operation surface concretely. The virtual operation surface 701 of this embodiment is set based on body dimension information such as the height and arm length, or the height and shoulder width, of the operator 102. The user 102 perceives the virtually existing operation surface 701 at the reach of a naturally extended arm, and when performing various operations, can use the operation surface 701 as a reference, extending a hand 601 forward past it to show a gesture. Also, within the operation region including the virtual operation surface, after taking an arbitrary posture, the user can decide an action by pushing (deciding) forward through the virtual operation surface, or such a push can be set as the reference for confirming an operation after it is determined; the user therefore recognizes it easily, and the operability is close to conventional touch-panel operation. Meanwhile, compared with a conventional touch panel, the variety of operations increases overwhelmingly (two-handed operation, body actions, multiple fingers, and so on).
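The "push forward to decide" behavior described above is a temporal event, not a single-frame test: the fingertip must first be on the operator's side of the surface and then cross it. A minimal sketch, assuming per-frame fingertip depths arrive from upstream hand tracking:

```python
from typing import List


def detect_push(depth_track_m: List[float], surface_z_m: float) -> bool:
    """Detect the push-to-decide gesture: the fingertip is first observed on
    the operator's side of the virtual surface (depth >= surface depth) and
    then crosses it toward the camera within the tracked frames."""
    was_behind = False
    for z in depth_track_m:
        if z >= surface_z_m:
            was_behind = True
        elif was_behind:
            return True          # crossed from behind to in front: push detected
    return False


assert detect_push([1.60, 1.50, 1.38], surface_z_m=1.44)       # clear push
assert not detect_push([1.60, 1.55, 1.50], surface_z_m=1.44)   # never crossed
assert not detect_push([1.38, 1.35], surface_z_m=1.44)         # started in front
```

Requiring the behind-then-in-front transition is what separates an intentional push from a hand that merely happened to start inside the decision zone.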
In this embodiment, when the camera 201 captures the image of the operator 102, the virtual operation surface 701 shown in Figs. 5 and 6 is formed in real time. However, until the operator begins operation, the operator's standing position is not fixed, so the virtual operation surface is not determined, and operation determination is not easy either. Therefore, in this embodiment, the setting processing of the virtual operation surface is started at the moment the operator's body has been stationary for a fixed time within the imaging range of the three-dimensional camera.
The virtual operation surface of this embodiment can thus be formed in real time, but even so, operation determination can be performed more correctly by limiting the operator's standing position, by some method, to a fixed range optimal for the system. For example, although not shown, a footprint indicating the standing position can be drawn on the floor, or the arrangement of the display and the system can make the operator recognize the existence of the fixed range, or a partition screen can be erected so that operation is performed within the fixed range. The position and size of the virtual operation surface that the operator can naturally recognize are greatly influenced by the positional relation between the operator and the display, and it is preferable to assume the positions of the display, camera, operator, and so on in advance for the system as a whole; by limiting the standing position in this way, the operator can roughly infer where the virtual operation surface exists and operate accordingly.
In addition, as shown in Fig. 7, in this embodiment, when there are a plurality of operation target persons, that is, when a plurality of persons are read by the camera 201, the person 710 in the forefront among them, for example, is determined to be the operator 102, and the virtual operation surface is formed for that person. Of course, which of the many persons is chosen as the operator 102 can be determined in various ways according to the system, but by providing no operation region to anyone other than the top-priority user, malfunction and erroneous input can be prevented (single-person input).
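Selecting the foremost person from the stereoscopic data reduces to picking the detection with the smallest camera depth. A sketch under the assumption that upstream person detection supplies labeled depths (the labels are hypothetical):

```python
from typing import Dict


def select_operator(person_depths_m: Dict[str, float]) -> str:
    """Among all persons seen by the 3D camera, treat the one nearest the
    camera (smallest depth) as the operator; everyone else receives no
    operation region, which prevents spurious multi-person input."""
    return min(person_depths_m, key=person_depths_m.get)


# Three detected persons; person "b" stands in front and becomes operator 102.
assert select_operator({"a": 2.1, "b": 1.4, "c": 3.0}) == "b"
```

As the text notes, other selection policies are possible; depth ordering is simply the one the stereoscopic image makes cheapest.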
Fig. 2 is a block diagram schematically showing the structure of the computer 110 of the image recognition device of this embodiment. The display 111 is connected to the computer 110, as is the video camera 201 that photographs the operator 102 and the like, and the captured images are taken into the computer 110. In the CPU 210, the extraction of images and calculation of positions characteristic of this embodiment are performed on the captured image, and whether a part of the body has come out toward the camera side of the operation surface is decided from the calculated position. The computer 110 typically includes the CPU 210, executes on the RAM 212 programs stored in the ROM 211 or the like, and outputs to the display 111 or elsewhere the results obtained from the images input from the image recognition device. In this embodiment, the display 111 mainly outputs the various images provided by the applications the operator wishes to experience, but it also displays, as described later, information that assists operation input.
Fig. 3 is a block diagram showing an example of the functional modules of the program processed in the CPU 210 of the computer 110 of this embodiment. As shown in Fig. 3, the processing in the present system is carried out by an image reading unit 301, an image extraction unit 302, an image position calculation unit 303, and an operation judgment unit 304. In this embodiment, the processing from receiving the image from the video camera 201 through to outputting the resulting data is carried out by these four modules, but the processing is not limited to this and may use other modules, or fewer modules.
(Processing of this embodiment)
As shown in Fig. 6, in this embodiment the virtual operation surface is formed based on the image of the operator 102 photographed by the video camera 201, the position of a hand or finger of the operator 102 photographed in the same way is determined, and the positional relationship between the virtual operation surface 701 and the operator 102's finger 601 is calculated. In this embodiment, as a premise of this processing, at the initial setup known in the art, for example when the image recognition device of this embodiment has been newly installed, information such as the lens distortion of the video camera 201 to be used and the distance between the display 111 and the lens is input to the device as advance preparation. Threshold settings and the like are also adjusted in advance. When the initial setup of the system is finished, the processing of this embodiment is performed; this processing is explained below with reference to Fig. 4.
Fig. 4 is a flowchart of the processing of this embodiment. First, the image reading unit 301 reads the data photographed by the video camera 201 (S401), and the image extraction unit 302 extracts the operator's image from that data (S402).
As a result of this preparation, the virtual operation surface and the operating region are formed based on the extracted image of the operator 102 (S403). Here, referring to Fig. 8 and elsewhere, the shape of the operation surface is a rectangle standing perpendicular to the floor, but it is not limited to this; operation surfaces of various shapes and sizes can be formed according to the operator's mode of operation.
Here, the operating region means the region that contains the virtual operation surface characteristic of this embodiment and in which the hand, fingers, and so on that form the operator's operating body mainly move; as explained in connection with the assistance up to reaching the virtual operation surface described later, a fixed region extending from the operator's body past the virtual operation surface is used for recognizing the operation of the present invention. For example, as shown in Fig. 8, for an adult operator 810, the operating region 811 can be formed in consideration of the height (eye position) and the arm length; for a child operator 820, the height is lower and the arms shorter, so the operating region 821 can be set to match. If the virtual operation surface is set within this operating region, then by having the operator move a hand or fingers naturally, the intended operation can be judged from the motion of the hand or fingers.
More specifically, for example, the depth can be set up to the fingertips when the operator reaches forward, the width up to the span between the left and right wrists when the operator reaches sideways, and the height to the range from the operator's head position down to the waist position. When the target users of the system of this embodiment range from lower-grade schoolchildren to adults, the height range is roughly 100 cm to about 195 cm, and to absorb that height difference, a correction range of about 100 cm is needed for the vertical position of the operating region or the virtual operation surface.
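The region sizing above can be sketched as a rough calculation. The anthropometric ratios below are illustrative assumptions for the sketch, not values given in the embodiment:

```python
# Rough sketch of the operating-region sizing described above. The
# anthropometric ratios are illustrative assumptions, not embodiment values.

def operating_region(height_cm):
    """Estimate the depth, width, and height (cm) of an operator's region."""
    arm = 0.44 * height_cm           # assumed arm-length ratio
    eye = 0.93 * height_cm           # assumed eye height
    waist = 0.60 * height_cm         # assumed waist height
    return {
        "depth": round(arm, 1),          # to the forward-stretched fingertip
        "width": round(2 * arm, 1),      # wrist to wrist, arms stretched sideways
        "height": round(eye - waist, 1), # head (eye) position down to the waist
    }

# A roughly 95 cm height difference (100 cm child vs. 195 cm adult) implies
# the roughly 100 cm vertical correction range noted in the text.
adult, child = operating_region(195), operating_region(100)
```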
In addition, the setting of the virtual operation surface and the operating region may be performed every time, may be performed under fixed conditions, or the timing of their setting may be chosen in advance or on each occasion.
Using the relative relationship between the virtual operation surface formed by the operation input system and the operator 102 (S404), the operation judgment unit 304 judges that an operation has begun when a part of the operator 102 comes to the near side, as seen from the camera 201, of the operation surface (S405), and from the shape (an open palm, two raised fingers, and so on) and motion of each part, judges which of the operations assumed in advance that shape or motion corresponds to (S406). Here, which shape or motion corresponds to which operation can be decided independently by the system, or any technique known in the art can be introduced for the decision. As a result of the judgment, the computer 110 executes the judged operation as input (S407); if no part was ever extended to the near side of the virtual operation surface, it is judged that no operation was performed and the processing ends (S408). The judgment of operation content is not limited to the method explained here, and any known method can be used in this embodiment. The concrete judgment method is also omitted here, but in general, body shapes and motions of operators, such as predetermined gestures, together with the operation contents they signify, are stored in a database or the like, and after image extraction this database is accessed to decide the operation content. At that time, the judgment precision can of course also be improved by utilizing image recognition technology, artificial intelligence, and the like by methods known in the art.
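The S404 to S408 flow can be sketched minimally as follows. The gesture table contents and the coordinate convention (smaller z is nearer the camera) are illustrative assumptions standing in for the database lookup the text describes:

```python
# Minimal sketch of the S404-S408 flow: an operation begins only once a body
# part crosses to the near (camera) side of the virtual operation surface,
# after which the observed shape is looked up in a gesture database.
# The table contents and coordinate convention are illustrative assumptions.

GESTURE_DB = {"open_palm": "select", "two_fingers": "scroll"}

def judge_operation(finger_z_mm, surface_z_mm, shape):
    """Return the operation name, or None when no operation occurred."""
    if finger_z_mm >= surface_z_mm:      # S405 not met: surface never crossed
        return None                      # S408: judged as no operation
    return GESTURE_DB.get(shape)         # S406: map the shape to an operation
```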
Here, it will be understood that the position and size at which the virtual operation surface is formed can change depending on whether the operator is an adult or a child; moreover, the virtual operation surface must be adjusted not only for differences in the operator's build, such as height, but also for the position of the display 111 and the installation angle of the camera 201. Normally, a three-dimensional camera measures distance to an object parallel to, or concentrically around, the CCD or lens plane. When the display is placed at the operator's eye height, the camera is at a nearby position, and each is installed perpendicular to the floor, then, provided the operator is in a standing position, once an appropriate operating region has been generated, no particular adjustment or correction of their mutual positional relationship is needed. However, in the case of a ceiling-suspended display, or when an ultra-large display or a projector is used, various situations can be assumed for the positional relationship among the camera installation position, the display, and the operator.
Generally, the operator performs input operations while watching the target screen; therefore, unless the virtual operation surface is always arranged perpendicular to the straight line connecting the operator's line of sight with the target screen and the operating region is generated along that surface, the angle of the operator's pushing stroke in the Z direction becomes inconsistent, and even when the operator performs a pushing operation toward an intended point, the push deviates along some angle as it proceeds and normal operation becomes impossible. Accordingly, when the virtual operation surface is formed, its position, angle, and size must be adjusted to the positions and arrangement of the display, the camera, and the operator, and the position must be further adjusted as circumstances require.
Referring to Fig. 9, the operating region 821 and the virtual operation surface 601 are determined to fit the operator 820 as in Fig. 8; however, when the camera 201 is placed at the top of the display 111 as in the example of Fig. 9, unless the virtual operation surface 601 is perpendicular to the direction 910 in which the operator 820 stretches out the arm, the operator 820 cannot obtain a good operating feel toward the virtual operation surface, so the surface should be formed as a plane perpendicular to that direction rather than simply perpendicular to the viewing direction of the camera 201.
Further, referring to Fig. 10, the display 111 itself is installed overhead and mounted at an angle, so the virtual operation surface 701 is formed along a direction 1010 inclined upward, allowing the operator 820 to look up at the display 111 and operate it. In this case too, as in the example shown in Fig. 9, the field of view 1011 of the camera 201 and the viewing direction 1010 are mutually inclined at a certain angle, so correction is needed so that the information read by the camera 201 matches the inclined virtual operation surface 701. Furthermore, referring to Fig. 11, the camera 201 is installed near the floor, away from the display 111, and the operator 820's line of sight 1110 forms a still larger angle with the camera 201's field of view, so a correspondingly large correction is needed.
Fig. 12 is a diagram for explaining an example of determining the virtual operation surface 701 and the operating region 821. In this embodiment, to form the virtual operation surface, information such as the positions and installation method (for example, the installed angle) of the display 111 and the camera 201 and the standing position and height of the operator 820 is used. That is, as an example, first the height of the operator 820's eyes relative to the display 111 and the standing position are calculated, and from these the virtual operation surface 701 perpendicular to the operator's line of sight is obtained. Next, the angle between the line A-B connecting the operator 820's head and body and the center line 1210 of the camera's field of view is measured, and the inclinations of the virtual operation surface and the operating region are corrected. The stroke of the arm may be extracted from the operator's image, or it may be determined from average arm-length information per height based on the height information obtained. Alternatively, by using markers or the like identical to the operation-surface formation reference of the second embodiment described later, the position, size, angle, and so on of the virtual operation surface can be set. For example, at the stage of installing the system, a stand, a guide bar, or the like carrying markers is placed at the optimum position and photographed by the camera, the virtual operation surface is set at the positions of the photographed markers, and in actual use the initially placed stand or guide bar can be removed, and the virtual operation surface and the operating region can be formed with corrections according to the operator's build.
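The determination of Fig. 12 can be sketched under assumed geometry. Every numeric factor below (the 60% arm-stroke fraction, the 0.9 eye-height fraction) is an illustrative assumption; only the tilt correction between body line A-B and camera center line 1210 follows the text directly:

```python
# Sketch of the Fig. 12 determination under assumed geometry: the surface is
# placed perpendicular to the line of sight at a fraction of the arm stroke,
# and the tilt between body line A-B and camera center line 1210 is returned
# as a correction angle. All numeric factors are illustrative assumptions.

def place_surface(eye_height_cm, stand_dist_cm, arm_cm,
                  body_angle_deg, camera_angle_deg):
    """Return assumed placement parameters for the virtual operation surface."""
    return {
        # assumed: the surface sits at 60% of the arm stroke from the body
        "distance_from_display": round(stand_dist_cm - 0.6 * arm_cm, 1),
        "center_height": round(0.9 * eye_height_cm, 1),   # assumed fraction
        "tilt_correction_deg": body_angle_deg - camera_angle_deg,
    }
```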
As described above, the virtual operation surface and the operating region of this embodiment are determined according to the positions and arrangement of the camera, the display, and the operator so that natural operation and easier operation judgment become possible, and the actual operator's motion is detected to judge which operation is being performed. However, the concrete processing not explained here, such as how to specify a position or shape from the three-dimensional camera image or how to judge whether a part of the operator has passed through the virtual operation surface, that is, the processing needed to implement these embodiments, can be accomplished using any method known in the art.
(Assistance for operation input)
As explained above, simply by forming the virtual operation surface with a three-dimensional camera, the operator can spatially perceive an operation surface like that of a touch panel and, by performing various operations on that surface, can perform input using the whole body or parts of it; furthermore, by displaying on the display 111 an image of the operator relative to the virtual operation surface to assist the operation input, the system of this embodiment can be used still more easily and effectively.
Fig. 17 is a diagram showing an example in which guidance serving as this operation-input assistance is displayed on the display 111. For example, when a certain position of the image displayed at the center of the display 111 is to be indicated with a pointer, the desired position can be indicated by superimposing the finger thrust toward the virtual operation surface on the displayed image; by displaying a pointer 901 as shown in Fig. 17 to represent the indication, the operator can recognize and confirm the operation currently being performed while carrying out the next one. To describe this example, the pointer 901 is displayed on the screen when the finger is thrust toward the operation surface, and when the finger is withdrawn the pointer disappears or its shading changes, so that the operator can execute the input method of this embodiment in a natural manner from the motion of the hand and the appearance of the pointer displayed on the display 111. Similarly, an operation screen 902 showing the operator's own appearance, in the manner of Fig. 5 and Fig. 6, is displayed at a reduced size in the upper-right corner of the display 111, so that the system can show what motion is currently being made and which operation it is judged to be; also, a line graph 903 charting the motion of the hand makes the operator aware of how the hand's own back-and-forth movement behaves, so more accurate operation can be expected. In addition, although not shown, the gestures usable in the system can be displayed in the guidance, and the operator can be prompted to imitate them, thereby assisting the input.
(Operation assistance on the near side of the virtual operation surface)
In this embodiment, the operator operates with the spatially and virtually formed virtual operation surface as a reference, as though an input device such as a touch panel existed there, so that the operation content is judged reliably; in addition, operation is assisted during the interval before the hand or finger that is a part of the operator reaches the virtual operation surface, that is, from when the operator begins to move a hand or finger to perform some operation until the virtual operation surface is pushed; input can thereby be performed still more easily and accurately.
Basically, the principle of this operation assistance is to display visually on the display 111, in step with the operator's position relative to the virtual operation surface, for example the position of the hand or finger, which operation the operator is about to perform, thereby guiding the operator to make an accurate operation input.
Explaining this point with reference to Fig. 18 and Fig. 19: in this embodiment, when it is assumed in advance that the operator operates at a fixed standing position, the virtual operation surface 701 is formed at a position suitable for operating the surface from that standing position, or at an appropriate position matched to the operator's standing position. Likewise, as shown in Fig. 18, an appropriate operating region 821 for the operator 820 is set. As stated above, by showing on the display 111 in various ways what operation is currently about to be performed, the operator is enabled to recognize the operator's own operation.
Explaining one of these modes with reference to Fig. 19: when the operator intends to perform some operation on the system, in this example by moving the arm 2401 back and forth relative to the display 111 so that the position of the hand or finger 601 changes, that situation is displayed on the display 111, and when the extended finger 601 reaches a fixed position, processing fixed in the system, such as indicating an item on the screen of the display 111 at that moment, is executed. In the example of Fig. 19, the size of an icon changes according to the position (depth) of the finger 601 relative to the virtual operation surface 701: the closer the finger 601 approaches the virtual operation surface, the smaller the icon becomes, so the operator can recognize that the operator's own operation is being focused onto a fixed position. Then, at the position where the icon becomes smallest, the operation is confirmed and the corresponding processing is executed.
Fig. 20 is a diagram showing, as a result of the above operation, how the icon changes on the screen 2501 of the display 111. Referring to Fig. 20, suppose that a television program table, for example, is displayed on the screen 2501 of the display 111 and an operation related to a certain program is to be performed. In this state, when the operator wants to select, for example, the "setting change" menu button, the operator thrusts the finger 601 toward the display 111 to make the selection, as described above. In this embodiment, when the finger 601 approaches within a fixed distance of the virtual operation surface, an icon 2503 is displayed on the screen 2501. Since the finger's position is still relatively far, this icon is the larger icon at the right among those shown in Fig. 19. As the operator extends the arm 2401 further, this icon shrinks while approaching the target option "setting change", becomes a special icon at the fixed-size icon 2502, and when the finger crosses the virtual operation surface, it is judged that the item at the indicated position has been selected.
Thus, in the example of Fig. 20, the size of the icon displayed on the screen 2501 changes according to the position of the finger 601; the operator can therefore grasp how the operator's own motion is recognized by the system, intuitively recognize the position of the virtual operation surface, and perform operations such as menu selection. Here, as with the operator's overall image, the positions and sizes of the operator's whole body and of each part, including the finger 601 and the arm 2401, can be extracted using the three-dimensional camera. The depth of objects in the scene can thereby be grasped, so the distance and positional relationship to the virtual operation surface can be calculated from this information. However, for the three-dimensional camera used in this embodiment, the calculation of distances, the extraction of positions, and so on can use any method known in the art, and the explanation is therefore omitted here.
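The shrinking-icon feedback can be sketched as a simple distance-to-size mapping. The pixel sizes and the 300 mm near-side region depth are illustrative assumptions:

```python
# Sketch of the shrinking-icon feedback: the icon radius falls linearly as
# the finger closes the remaining distance to the virtual operation surface.
# Pixel sizes and the 300 mm near-side region are illustrative assumptions.

R_FAR, R_NEAR = 40, 8   # icon radius (px) at the region edge / at the surface

def icon_radius(dist_to_surface_mm, region_mm=300):
    """Map the finger-to-surface distance onto an icon radius in pixels."""
    t = max(0.0, min(1.0, dist_to_surface_mm / region_mm))
    return round(R_NEAR + t * (R_FAR - R_NEAR))
```

The smallest radius doubles as the confirmation cue: reaching it coincides with the finger touching the virtual operation surface.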
Here, the icon displayed on the screen is circular and changes size in step with the operator's motion, but this is not limiting; as shown in Fig. 21, icons of various forms can be made to change in various ways. That is, referring to Fig. 21, (1) is a finger-shaped icon that, as in the example of Fig. 20 above, becomes smaller as it approaches the virtual operation surface. (2) is circular and gradually shrinks, changing to a special shape when the input or selection is confirmed so as to indicate the confirmation. For this and other icons, the color of the icon can also be changed in place of, or together with, the change of shape or size. For example, by changing from a cool color system toward a warm one, say blue, green, yellow, then red, the operator can intuitively recognize that the operation is being focused and then confirmed. (3) is an X-like shape that, when far away, is not only large but also blurred; as it approaches, the icon becomes smaller, the blur disappears, and it forms a sharp shape. (4) keeps the overall size of the icon unchanged, and focusing is recognized by a change in the shape of the figure drawn inside it; in this case, the color of the figure can also be changed. (5) shown in Fig. 21 is likewise a case where the shape changes. In Fig. 21, the shapes and colors of the icons change, and when the finger passes beyond the virtual operation surface, the icon can momentarily change to various shapes or colors, or blink, as shown in column 2601, to let the operator recognize that the operation has been judged. In addition, although not shown, another effective icon variation is one that is initially transparent and becomes more opaque the closer the finger approaches the virtual operation surface.
Here, when the color or density is changed without particularly changing the shape, as shown in Fig. 22, the color becomes warmer or denser as the finger 601 approaches while the icon hardly moves, and the input can thereby be confirmed.
In the above examples, to confirm the judgment of the operation, an icon is displayed and its color or shape is changed according to the operator's motion; however, as shown for example in Fig. 23 and Fig. 24, when the positions to be indicated are fixed in advance, as in a menu, even without purposely displaying an icon, the item button closest to the position indicated by the finger 601 is decided, and by changing its fill color or fill density according to the motion of the finger 601, above all according to the distance from the virtual operation surface, the position of the virtual operation surface can be recognized and operation input made easy. Fig. 23 is a diagram showing an example in which the color of the selected button is changed from a cool toward a warm color system as the finger 601 approaches. As the color selection for this example, when, for instance, (2) blue, (3) green, (4) yellow, and (5) red are used, the operator can intuitively identify that the selection is confirmed when the button becomes red. Likewise, Fig. 24 is a diagram showing an example in which the fill density of the buttons is changed.
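The cool-to-warm highlight can be sketched as a stepped color map over the same near-side distance. The color stops follow the (2) to (5) sequence in the text; the region depth is an illustrative assumption:

```python
# Sketch of the cool-to-warm button highlight: the targeted button's fill
# steps from blue toward red as the finger nears the surface. The 300 mm
# region depth is an illustrative assumption.

STOPS = ["blue", "green", "yellow", "red"]   # (2)-(5) in the description

def button_color(dist_to_surface_mm, region_mm=300):
    """Pick a fill color; red means the selection is about to be confirmed."""
    t = max(0.0, min(0.999, 1.0 - dist_to_surface_mm / region_mm))
    return STOPS[int(t * len(STOPS))]
```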
As a selection example for a similar menu there is also the example shown in Fig. 26, described here. For instance, when the finger 601 of Fig. 19 enters the fixed region on the near side of the virtual operation surface 701, a menu 4301 is displayed on the screen, and when that finger 601 approaches the virtual operation surface 701 still more closely, a large icon 2610 is displayed on, for example, the item 4302 of the menu shown in Fig. 26. Thereafter, when the finger 601 reaches the virtual operation surface 701, the selection of the item 4302 is confirmed and a small icon 2611 is displayed to give notice of this. Thereafter, by moving the finger 601 up, down, left, or right, the selected item of the menu can be moved, and when it rests on a desired item for a fixed time, processing corresponding to the selected item can be executed. It is also possible to erase the menu when, before the selection is carried out, the finger 601 is moved rearward of the fixed region on the near side of the virtual operation surface 701. In Fig. 31, likewise, a menu is displayed when the finger 601 enters the fixed region on the near side of the virtual operation surface 701, this time as an example of video image control. In this example too, as in the example shown in Fig. 26, menu operation can be performed using a large icon 3110 and a small icon 3111.
Further, another input-operation example is described with reference to Fig. 25. Fig. 25 is a diagram showing display screens for an example of instructing, by input of this embodiment, the movement of a figure displayed on the screen. The instruction is given by making the operator's hand or finger touch the virtual operation surface and move. First, as the finger or the like is brought closer to the screen, the icon is reduced from the icon 4201 of screen 4211 to the icon 4202 of screen 4212, showing that it is approaching the virtual operation surface. Thereafter, when it touches the virtual operation surface, the color is changed as in the icon 4203 of screen 4213, and when the finger or the like is moved upward in this state, a rubber band 4204 or the like of screen 4214 is displayed to show the direction of movement, so that the operator can confirm the operator's own operation. Also, when the finger is moved rightward, the rubber band 4205 of screen 4215 can be displayed. In this way, a rubber band (the arrow in Fig. 25) that stretches up, down, left, or right according to the drag distance after the finger or the like reaches the virtual operation surface is displayed (the position of the icon 4203 is fixed until the finger leaves the virtual operation surface); the movement speed can be changed according to the stretch distance, and the direction of movement in three-dimensional space can be changed according to the stretch angle (the arrow tip follows the motion of the arm, finger, or the like).
Above, the principle of this embodiment has been explained for the situation, shown in Fig. 18, in which the operator and the display face each other at roughly the same height, that is, the virtual operation surface is formed roughly perpendicular in front of the operator at a comparable height; however, this principle is not affected by the positional relationship or configuration between the operator and the display, and various arrangements and structures are possible. For example, the system arrangements shown in Fig. 10 to Fig. 12 can also be used. In that case, the three-dimensional camera 201 is tilted together with the display 111, so there is basically no great difference from the horizontally placed arrangement described above; and even supposing the camera were installed at some other position, position correction or the like by any method known in the art can compute the positional relationship between the operator and the virtual operation surface, and the operation can thereby be judged.
(Operation on the far side of the virtual operation surface: virtual operation layers)
In this embodiment, the operator operates with the spatially and virtually formed virtual operation surface as a reference, as though an input device such as a touch panel existed there, so that the operation content is judged reliably; the content of the operation so judged is then decided from the positional relationship between the part of the virtual operation surface lying in the depth direction, that is, in the direction away from the operator, and a part of the operator's body, such as a hand, or an object worn on the body. For example, two or three layers of operating regions are set in the z-axis direction away from the operator as virtual operation layers; the kind of operation is decided according to which layer the operator's hand enters, and the operation content is decided from the motion of the hand within that layer. At that time, if the position of the hand, the kind of operation, and so on are displayed on the screen the operator is viewing, the operator can recognize the operation more easily. The z-direction distance between a part of the operator and the surfaces dividing the layers can be obtained by the method, described above, of calculating the distance between the formed virtual operation surface and a part of the operator.
Describing this more concretely, the trigger surface 701 shown in Fig. 27 is the virtual operation surface of this embodiment; when, using any of the embodiments described above, the finger 601 enters in the z-axis direction from the trigger surface 701, it is judged that an operation has been made. The operating region ahead of the trigger surface 701 is then divided into three layers A to C by surfaces 4501 and 4502, and a different kind of operation is assigned to each. In the example of Fig. 27, a rotation operation of an object is assigned to layer A, an enlarge/reduce operation to layer B, and a move operation of the object to layer C. In each layer, the determined operation is executed by moving the finger 601. For example, in layer A, when the icon representing the finger 601 passes the trigger surface 701, the designated object rotates in accordance with the motion of the finger 601 around, for example, the position shown by the rotation icon 4503. In layer B, for example, an enlarge/reduce icon 4504 can be shown on the display 111, and the object is enlarged when the finger 601 is moved in the z direction and reduced when it is moved in the opposite direction.
Likewise, in layer C, a move icon 4505 can be displayed at the position of the finger 601 on the designated object shown on the display 111, which then moves in accordance with the motion of the finger 601. Here, the dividing surfaces 4501 and 4502 can be arranged so that each layer has the same thickness, or arranged with different thicknesses per layer according to the kind of operation assigned to the layer. For example, in the example of Fig. 27, layer B is assigned the enlarge/reduce operation, which must be expressed by back-and-forth motion; its z-direction movement is therefore usually larger than in layers A and C, so making layer B thicker can ease the operation.
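The layer scheme of Fig. 27 can be sketched as a depth classifier. The boundary depths are illustrative assumptions; only the three operation kinds and layer B's extra thickness follow the text:

```python
# Sketch of the Fig. 27 layer scheme: depth past the trigger surface selects
# the kind of operation. Layer B is deliberately thicker than A and C, since
# its enlarge/reduce gesture itself moves in z. Boundaries are assumptions.

LAYERS = [(50, "rotate"),    # layer A: 0-50 mm beyond the trigger surface
          (150, "scale"),    # layer B: 50-150 mm (thicker on purpose)
          (200, "move")]     # layer C: 150-200 mm

def classify_layer(depth_past_surface_mm):
    """Return the operation of the layer the hand is in, or None outside."""
    if depth_past_surface_mm < 0:
        return None                    # still on the near side: no operation yet
    for bound, op in LAYERS:
        if depth_past_surface_mm < bound:
            return op
    return None                        # beyond layer C: outside the region
```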
Fig. 28 is a diagram showing an example of other icons of this embodiment. In the example shown in Fig. 28, an operation for specifying an operation position on the display 111 is assigned to layer A, an operation for "grabbing" the object at the specified position is assigned to layer B, and an operation for throwing or moving the grabbed object is assigned to layer C.
As described above, when the operation content is judged after an operation has been recognized via the virtual operation surface, the kind of operation can be determined not only from the motion of the finger or hand but also from its position in the z direction, that is, from the virtual operation layer; therefore, whereas judging by finger and hand motion alone would require preparing many different gesture patterns that the user would have to memorize, here complex operations can be used selectively through simple motions alone.
In the above, particularly in the example shown in Fig. 27, the layers were arranged so that a series of hand or finger motions can be carried out continuously between them; but with an arrangement that cannot be operated continuously (the example of Fig. 28), the following two points become problems. Namely, (1) before the hand reaches the intended virtual operation layer, it passes through other layers and issues instructions the operator does not want; and (2) when the hand is withdrawn from the operating region after the intended operation is finished, it again passes through other virtual operation layers and issues unwanted instructions. To avoid these problems, for example, the following method can be considered. That is, a multi-sensing state in which the hand opposite to the operating hand is inserted into the operating region (for example, when operating with the right hand, the state of putting the left hand into the operating region) is set as the no-operation state (or, conversely, as the operating state), and whether the operation of each layer is performed is judged by the withdrawal and insertion of the hand opposite the operating hand (in this example by two-hand operation, but various other methods are conceivable, such as providing a withdrawal area on the XY plane).
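The opposite-hand guard can be sketched in a few lines. The single-flag interface is an illustrative assumption standing in for the multi-sensing state the text describes:

```python
# Sketch of the opposite-hand guard described above: while the non-operating
# hand is inside the operating region, layer operations are suppressed, so
# the operating hand can pass through layers without issuing commands.
# The single-flag interface is an illustrative assumption.

def effective_operation(layer_op, other_hand_in_region):
    """Suppress the layer's operation while the opposite hand is inserted."""
    return None if other_hand_in_region else layer_op
```

Inverting the condition gives the converse convention also mentioned in the text, where inserting the opposite hand enables rather than mutes the operation.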
As described above, when this embodiment is used, the operator can operate the system through motion without having to memorize or agree upon gestures in advance; moreover, since the operator's posture and the motion of each part, for example each hand, can be known, the embodiment can also be applied to games that use the whole body, thereby realizing mixed reality (MR).
(Second embodiment)
The system configuration of this embodiment is basically identical to that of the first embodiment described above, except for the operation surface formation reference. That is, in this embodiment, building on the system and processing of the first embodiment, the concept of an operation surface formation reference is introduced: a fixed marker 101 that the operator can perceive, as shown in Figure 14, which the operator uses as a cue to recognize the virtual operation surface more easily. That is, the marker 101 shown in Figure 14 is an operation surface formation reference used by the operator 102 to recognize the virtual operation surface; as shown in Figure 16, the user 102 assumes that an operation surface 701 exists virtually above the marker 101 placed on the floor and performs various operations, stretching a hand 601 forward with the marker 101 as a reference to express gestures. The lateral width of the marker 101 can also be taken as the width of the operation surface. Auxiliary markers may further be used to distinguish the front and rear of the marker 101 or to define the operating area, or may serve as elements for three-dimensional perspective computation; their shape and orientation are arbitrary, and they may indicate a region suitable for measurement.
In an operation input system provided with this marker 101, as shown in Figure 16, the operation surface 701 is formed virtually above the marker 101, and the operator 102 can easily perform input operations by imagining the virtual operation surface 701 from the marker 101 and reaching out a hand 601, or by moving the hand 601 in conjunction with the display 111, treating part of the screen as a touch panel, to select and touch the operation surface 701. Moreover, after taking an arbitrary posture in the operating area, the user can decide an action by pushing forward (a decision motion) across the line of the surface, or a criterion can be set whereby the user pushes forward after deciding an operation; the judgment is therefore easy for the user to understand, and the operability approaches that of a conventional touch panel.
In this embodiment, with reference to Figure 16 and other figures, the virtual operation surface is shown as formed vertically directly above the marker; with the system arrangements shown in Figures 9 to 11, however, only the base of the virtual operation surface may be made to coincide with the operation surface formation reference while the whole surface is tilted, or the formation position may be shifted to match the operator's height. In this case, for example, a fixed operation surface is first computed from the marker 101 and is then corrected based on the operator's image, so that the virtual operation surface is adjusted to form at an appropriate position. Alternatively, the operation surface is computed from the measured position of the marker 101 and the preset positions of the display 111 and camera 201, and the operator's height, arm length and so on are extracted from the operator's image; this information is then taken into account to correct the position, size, angle and so on of the virtual operation surface.
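The correction step described here can be sketched as follows. The coefficients relating the operator's height and arm length to the corrected surface position are illustrative assumptions, not values given in the patent:

```python
# Sketch: place the virtual operation surface from the marker position,
# then correct it using the operator's height and arm length extracted
# from the camera image. The correction coefficients are assumed values.
from dataclasses import dataclass

@dataclass
class OperationSurface:
    distance_mm: float  # distance of the surface from the camera side
    height_mm: float    # height of the surface's upper edge
    tilt_deg: float     # tilt relative to vertical

def form_surface(marker_distance_mm: float,
                 operator_height_mm: float,
                 arm_length_mm: float) -> OperationSurface:
    # Start from a fixed surface computed from the marker alone...
    surface = OperationSurface(distance_mm=marker_distance_mm,
                               height_mm=1500.0, tilt_deg=0.0)
    # ...then correct it so the surface sits within comfortable reach.
    surface.distance_mm = marker_distance_mm - 0.4 * arm_length_mm
    surface.height_mm = 0.85 * operator_height_mm
    return surface

s = form_surface(1000.0, operator_height_mm=1700.0, arm_length_mm=650.0)
print(round(s.distance_mm), round(s.height_mm))  # -> 740 1445
```

A per-operator correction of this kind is what lets the same marker serve operators of different builds, as the paragraph above notes.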
In addition, the marker serving as the operation surface formation reference is visible: the operator visually recognizes the marker, takes it as a reference, and operates by roughly estimating where the virtual operation surface lies. The virtual operation surface therefore needs to be formed above the marker, but its front-back position as seen from the operator may change depending on the operator and on the overall system. In general, as shown in Figure 29, when a marker 4401 is placed on the floor or the like, the operator 102 will, judging from the position of the eyes of the operator 102, often stand close to directly above the marker 4401; it is therefore often better to form the virtual operation surface 701 at a position 4402 slightly away from the marker 4401 on the side opposite the operator 102, so that the arms have ample room to move and the operation can be performed naturally. On the other hand, as described later, when a marker 1902 is affixed to the edge of the desk shown in Figure 15, the operator's movement is restricted by the edge opposite the one bearing the marker; that is, the body cannot approach the operation surface beyond this edge, so the width of the desk can be chosen appropriately to make operation easy. In this case, forming the virtual operation surface directly above the marker is considered to make it easier for the operator to recognize it. In the first embodiment the operator's arm length was measured to set this front-back position; with a marker the operator can perceive, however, the operation surface can be formed objectively.
In this way, the operating area including the virtual operation surface was set in the first embodiment above by taking the reach of the arm into account; by combining markers in various ways, however, the operating area can be fixed more objectively, that is, in such a way that any operator can visually recognize it with a fixed degree of accuracy.
Furthermore, an operation surface formation reference such as that of this embodiment allows measurement markers to be dispersed appropriately over a wide range of the captured screen, so measurement of very high reliability is possible. In addition to this effect, the reference can be used together with a calibration system that guarantees the marker is always within the imaging range of the camera, realizing a space-saving, multi-functional device; after the calibration at initial installation, re-measurement is basically not required each time.
As stated above, the marker 101 is captured by the video camera 201 and serves as the operation surface formation reference; to make this easier, various materials known in the art can be used as the marker material. The suitable material usually depends on the camera used: for an ordinary camera, a distinctive color that stands out from the background is needed, and when an infrared camera is used, a retroreflective material or the like can be used. On the other hand, with laser light it is difficult to measure reflected light from colors and materials that reflect little, such as black parts; in that case, instead of a marker or retroreflective material, a black bar or the like is used, so that the part irradiated by the laser is not reflected and appears as a gap on the screen, whereby the position of the bar can likewise be detected.
For example, when the marker is provided by coloring it with a fixed color, it can be extracted by processing as follows. The data captured by the video camera 201 is read by the image reading unit 301, and the image of the marker 101 is extracted from this data; in the case of a color image, for example, the image extraction unit 302 selects the color region predetermined for the marker and extracts only the image of the marker 101. Specifically, in this embodiment, upper and lower thresholds are set for each of the luminance signal Y and the color-difference signals U and V of the color NTSC signal, and pixels satisfying all thresholds are extracted; however, this is not a limitation, and any method known in the art may be used. In this way the position of the marker 101 is grasped, and the virtual operation surface that would result is computed three-dimensionally and stored in a database. When color extraction is finished, and if auxiliary markers are present, they are extracted by the same processing; the image position calculation unit 303 then binarizes the extracted marker portion into black and white and counts the number of pixels forming the vertical and horizontal edges of the marker extracted from the image captured by the video camera 201. The lengths and tilts of the vertical and horizontal edges of the acquired image are compared with a reference image to compute the tilt and scale of the imaged space. In this embodiment, when computing tilt and scale, markers may also be placed at four or more points and used as references. For example, if there are four or more reference points, they can be connected into line segments and used for calibration.
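The threshold extraction described in this paragraph can be sketched with NumPy as below. The particular threshold ranges chosen for the marker color are illustrative assumptions; the patent only specifies that upper and lower thresholds are applied to each of Y, U and V:

```python
# Sketch: extract marker pixels by thresholding each of Y, U, V, keeping
# only pixels that satisfy all three ranges, then binarize and measure
# the extracted region. Threshold values are assumed, not from the patent.
import numpy as np

def extract_marker(yuv: np.ndarray,
                   y_range=(60, 200), u_range=(140, 255), v_range=(0, 110)):
    """yuv: H x W x 3 array of Y, U, V planes. Returns a binary mask."""
    y, u, v = yuv[..., 0], yuv[..., 1], yuv[..., 2]
    return ((y_range[0] <= y) & (y <= y_range[1]) &
            (u_range[0] <= u) & (u <= u_range[1]) &
            (v_range[0] <= v) & (v <= v_range[1]))

def edge_pixel_counts(mask: np.ndarray):
    """Count pixels along the bounding box of the marker region, as a
    stand-in for measuring its vertical and horizontal edge lengths."""
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return 0, 0
    return int(ys.max() - ys.min() + 1), int(xs.max() - xs.min() + 1)

# A toy 6x8 frame containing a 2x4 marker-coloured block:
frame = np.zeros((6, 8, 3), dtype=np.uint8)
frame[2:4, 1:5] = (120, 180, 90)       # inside all three ranges
mask = extract_marker(frame)
print(edge_pixel_counts(mask))          # -> (2, 4)
```

Comparing these edge-pixel counts and the region's tilt against a reference image is what yields the scale and tilt of the imaged space, as the text describes.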
As stated above, a suitable material can be affixed to the floor and used as the marker, but this is not a limitation: the material may be applied directly to the floor, or any attachment method known in the art may be used. Moreover, although the above description uses the marker 101 as the operation surface formation reference, this is not a limitation, and any member or structure can be used as the spatial measurement reference. For example, the marker need not have the shape shown in Figure 1 but may be a figure of various shapes, and a plurality of markers each having a fixed area may be placed at a plurality of points.
In addition, as an operation surface formation reference, markers 1902 and 1903 may be attached to a three-dimensional object, for example the desk-shaped solid object 1901 shown in Figure 15, and used as the operation surface formation reference to form the virtual operation surface 701; input operations can then be performed, for example, by operating this virtual operation surface 701 with a finger 601. Referring to Figure 16, the virtual operation surface is shown as a rectangle standing vertically from the floor, but this is not a limitation; operation surfaces of various shapes and sizes can be formed depending on the shape and arrangement of the marker 101. For example, the marker 101 shown in Figure 16 is a straight line of fixed length parallel to the face of the display 111, so the virtual operation surface takes the shape of the operation surface 701; but the marker may also be a straight line set obliquely at a fixed angle, in which case the shape is the same as that of the operation surface 701 shown in Figure 16, while its orientation is placed obliquely at a fixed angle to the display 111. In this case too, the operator 102 can grasp the operation surface formed virtually by the obliquely arranged marker and operate with that surface in mind. Also, by arranging auxiliary markers three-dimensionally, the operation surface can be made an inclined surface at a fixed angle to the floor, or even a curved surface. Although this embodiment has been described as performing processing with the virtual operation surface formed from a marker or the like as the reference, those skilled in the art will understand that in the actual computation the operator's position may equally well be computed relative to the operation surface; this is because the operator always operates with the virtual operation surface in mind.
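An obliquely placed straight-line marker yields an obliquely oriented vertical surface. A minimal geometric sketch of deriving that surface's orientation from the marker's two floor endpoints follows; the coordinate convention and names are assumptions for illustration:

```python
# Sketch: derive a vertical virtual operation surface from the two floor
# endpoints of a straight-line marker. The surface contains the marker
# line and extends straight up, so an obliquely placed marker yields an
# obliquely oriented surface. Coordinate convention is an assumption.
import math

def surface_normal(p1, p2):
    """p1, p2: (x, z) floor coordinates of the marker endpoints.
    Returns the unit normal (nx, nz) of the vertical plane above them."""
    dx, dz = p2[0] - p1[0], p2[1] - p1[1]
    length = math.hypot(dx, dz)
    return (-dz / length, dx / length)

# Marker parallel to the display (along x): the normal points along z.
print(surface_normal((0, 0), (1000, 0)))        # -> (0.0, 1.0)
# Marker rotated 45 degrees: the surface normal rotates with it.
nx, nz = surface_normal((0, 0), (1000, 1000))
print(round(nx, 3), round(nz, 3))               # -> -0.707 0.707
```

Curved or tilted surfaces built from auxiliary markers would generalize this from a single plane to a parameterized surface, but the plane case shows the principle.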
In addition, when the marker-bearing mounting table shown in Figure 15 is used, as in the later-described Figure 18, it is possible, for example using marker 1903, to set only the upper body of the operator 102 as the subject area 2301 and to judge only the motion of the part protruding forward from the virtual operation surface 701 as operation. With this configuration, when the operator performs input operations using the mounting table of Figure 15 as body support, even if the lower body, particularly a foot, is stretched forward, only the motion of the upper body is consistently recognized as operation.
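The upper-body restriction just described can be sketched as a simple filter over the measured points: a point counts as an operation candidate only if it belongs to the upper body and has crossed the virtual surface. The waist height and surface depth below are assumed values:

```python
# Sketch: treat only points that belong to the upper body AND protrude
# in front of the virtual operation surface as operation candidates.
# The waist height and surface depth used here are assumed values.

WAIST_HEIGHT_MM = 900.0     # points below this are ignored (feet, legs)
SURFACE_Z_MM = 1200.0       # depth of the virtual operation surface

def operation_points(points):
    """points: iterable of (x, y, z) in mm, y up, z away from the camera.
    Keep only upper-body points that have crossed the surface."""
    return [(x, y, z) for (x, y, z) in points
            if y >= WAIST_HEIGHT_MM and z < SURFACE_Z_MM]

cloud = [(0, 1400, 1100),   # hand in front of the surface -> kept
         (0, 1400, 1300),   # hand behind the surface      -> dropped
         (0, 300, 1100)]    # foot stretched forward       -> dropped
print(operation_points(cloud))  # -> [(0, 1400, 1100)]
```

This is why a foot stretched under the table never registers as an operation: it fails the subject-area test even though it is in front of the surface.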
When a virtual operation surface is formed using an operation surface formation reference, the three-dimensional marker may be removed after the reference surface and measurement area have been measured with it, and afterwards only a line or similar marker may be placed on the floor so that the generated reference surface can still be judged. This method of forming the virtual operation surface is effective in environments where permanently installing a three-dimensional marker is unsuitable, such as narrow spaces where a three-dimensional guide bar cannot be erected. With a floor-laid calibration pattern, measurement can be more difficult than with a stereoscopic calibration because of the mounting angle of the 3D camera, and it can be hard to choose a floor material with good suitability (wear resistance, slip prevention, and so on); replacing it with a simple floor calibration requiring no adjustment mechanism is easier to implement. It is also possible, after measuring as above, to replace the marker with a solid guide having no calibration function (no markers); replacing it with a design-oriented or inexpensive type can be effective, while the user can still visually recognize it.
Each of the above methods replaces the marker, after calibration, with another means (three-dimensional or planar) whose position the user can visually recognize and which is related to restricting movement. Moreover, the method is not limited to calibration alone: the reference surface may be set in advance at a distance and position convenient for the camera side, and a floor marking or three-dimensional guide may then be installed on that surface (area) so that the user can recognize it.
Here, the relation between the marker and the virtual operation surface in this embodiment is described with reference to Figure 29. In this embodiment, basically, a marker is attached to the edge of a desk, table or the like, and the operator touches, or moves a hand against, the virtual operation surface formed above the marker, causing the system to recognize this as an input operation. At that time, the unmarked edges of the desk or table restrict the operator's movement and moderately assist the raised hand in touching the virtual operation surface naturally. Explaining this concept with Figure 29: the virtual operation surface 701 is formed above the marker 4402, which serves as the operation surface formation means; on the other hand, an arbitrary motion limiting means 4401 keeps the operator 102 at a fixed distance from the virtual operation surface, so that the operator 102 can operate the virtual operation surface with a hand 601 stretched out naturally forward. In this embodiment the virtual operation surface 701 is formed directly above the marker 4402, but as explained in the first embodiment it may also be moved forward or backward with the marker 4402 as reference. For example, since the motion limiting means 4401 is basically fixed, if the virtual operation surface were always formed directly above the marker 4402 regardless of the build of the operator 102, it might end up too near or too far and degrade usability; in that case, the position at which the virtual operation surface is formed may be shifted forward or backward from the marker 4402 for each operator.
As described above, in this embodiment the virtual operation surface is formed from an image obtained by capturing, with a three-dimensional camera, both an operation surface formation reference perceivable by the operator and the operator themself; the position of the virtual operation surface is therefore easy to determine objectively, and since the operator's height and so on are also taken into account, the operator obtains a natural operating feel without any sense of discomfort.
(Third embodiment)
The system configuration of this embodiment is basically identical to that of the first and second embodiments described above, differing in that a projector is used for display instead of a display monitor. That is, in this embodiment the processing is basically the same as in the first and second embodiments, but instead of a display 111 such as an LCD or plasma panel, an image is projected from a projector 3011 onto a screen 3010, as shown in Figure 30, to present various information to the operator. In the system of this embodiment, where the first embodiment and others placed an LCD or the like as the display surface, only a screen is placed; as shown in Figure 30, the projector 3011 that projects the image, the camera 201, and the computer that controls them can therefore be made into a single integrated unit. This integrated unit is usually placed between the operator and the screen, so, as illustrated, a guide bar 3012 is placed to mark the no-entry area; this guide bar 3012 can also double as an operation surface formation reference like that of the second embodiment.
In this embodiment only the display method differs from the first embodiment, and the display surface itself differs little, so the setting of the virtual operation surface and the operating area and the judgment of operations are basically the same as in the first and second embodiments. However, as stated above, the projector, camera and computer are integrated and placed between the operator and the display surface (screen 3010), so the position of the camera 201 differs somewhat; compared with the case of the first embodiment and the like, where the camera is installed at the bottom of the display surface, the adjustment range of the angle of the operating area and so on becomes larger. Also, the positional relation between the guide bar 3012 and the virtual operation surface 701 differs from the case described in the second embodiment, and the virtual operation surface 701 is not necessarily formed directly above the guide bar 3012. This is because, while the guide bar 3012 of this embodiment, which also serves to prevent entry, plays the same role as a fixed marker 101 deliberately drawn on the floor as in Figure 14 and consciously perceived by the operator, the position at which the virtual operation surface is formed varies with the relation to the operator. Using any knowledge known in the art, the virtual operation surface can, depending on the system, be formed on the far side or the near side of the guide bar 3012 taken as the reference.
As described above, in this embodiment, by using a projector for display, the projector, camera and computer can be integrated; installation and processing therefore become easy, and when the picture is large, this is more advantageous than using a large LCD in both ease of installation and cost.

Claims (13)

1. An image recognition device characterized by comprising:
a three-dimensional imaging unit that captures an image of an operator and generates stereoscopic image data;
an operation surface forming unit that forms a virtual operation surface based on the image of the operator acquired by the above-mentioned three-dimensional imaging unit;
an operation judging unit that uses the above-mentioned three-dimensional imaging unit to read the motion of at least a part of the operator relative to the formed virtual operation surface, and judges whether the motion is an operation based on the positional relation between the part of the operator and the above-mentioned virtual operation surface; and
a signal output unit that outputs a prescribed signal when the motion is judged to be an operation.
2. The image recognition device according to claim 1, characterized in that
the above-mentioned operation judging unit judges the motion to be an operation when the part of the operator is positioned closer to the above-mentioned three-dimensional imaging unit than the above-mentioned virtual operation surface.
3. The image recognition device according to claim 1 or 2, characterized in that
the above-mentioned operation judging unit judges which operation is being performed based on the shape or motion of the part of the operator positioned closer to the above-mentioned three-dimensional imaging unit than the above-mentioned virtual operation surface.
4. The image recognition device according to claim 3, characterized in that
the above-mentioned operation judging unit searches a storage unit that holds, in advance, shapes or motions of a part of the operator in association with operation contents, and judges the operation corresponding to the matching shape or motion to be the operation that was input.
5. The image recognition device according to any one of claims 1 to 4, characterized by
further comprising an image display unit arranged so as to face the operator,
wherein the above-mentioned operation judging unit causes the above-mentioned image display unit to display the operation judgment result of the current time in such a manner that the operator can recognize the judgment result of the operation.
6. The image recognition device according to any one of claims 1 to 4, characterized by
further comprising an image display unit arranged so as to face the operator,
wherein, when the motion of the operator is read within the region of a virtual operation layer, a sign allocated in advance to that virtual operation layer is displayed on the above-mentioned image display unit.
7. The image recognition device according to any one of claims 1 to 4, characterized by
further comprising an image display unit that can be visually observed by the operator, wherein this image display unit calculates the distance corresponding to the positional relation between a part of the operator on the opposite side of the above-mentioned three-dimensional imaging unit and the virtual operation surface formed by the above-mentioned operation surface forming unit, and displays a sign that changes according to this distance, thereby indicating the operation about to be judged.
8. The image recognition device according to claim 7, characterized in that
the above-mentioned image display unit stops the change of the sign and displays the judged operation when the part of the operator is positioned closer to the above-mentioned three-dimensional imaging unit than the above-mentioned virtual operation surface.
9. The image recognition device according to any one of claims 1 to 8, characterized by
further comprising an operation content deciding unit that, when the motion of the operator is read within the region of any one of two or more virtual operation layers determined based on the positional relation with the above-mentioned virtual operation surface, decides the content of the operation based on the operation kind allocated in advance to that virtual operation layer and on the motion of the operator within that virtual operation layer.
10. The image recognition device according to any one of claims 1 to 9, characterized in that
the above-mentioned operation surface forming unit forms the above-mentioned virtual operation surface at a position corresponding to positional information of the upper body of the operator.
11. The image recognition device according to any one of claims 1 to 10, characterized in that
the above-mentioned operation surface forming unit adjusts the position and angle of the above-mentioned virtual operation surface in accordance with the position of the above-mentioned image display unit.
12. An operation judging method for recognizing an image of an operator and determining operation content by means of an image recognition device, the operation judging method characterized by comprising the following steps:
a three-dimensional imaging step of reading an image of an operator and generating stereoscopic image data;
an operation surface forming step of forming a virtual operation surface based on the image of the operator thus read;
an operation judging step of reading, by the above-mentioned three-dimensional imaging, the motion of at least a part of the operator relative to the formed virtual operation surface, and judging whether the motion is an operation based on the positional relation between the part of the operator and the above-mentioned virtual operation surface; and
a signal output step of outputting a prescribed signal when the motion is judged to be an operation.
13. A program that causes an image recognition device to execute an operation judging method for recognizing an image of an operator and determining operation content, the program characterized in that the operation judging method comprises the following steps:
a three-dimensional imaging step of reading an image of an operator and generating stereoscopic image data;
an operation surface forming step of forming a virtual operation surface based on the image of the operator thus read;
an operation judging step of reading, by the above-mentioned three-dimensional imaging, the motion of at least a part of the operator relative to the formed virtual operation surface, and judging whether the motion is an operation based on the positional relation between the part of the operator and the above-mentioned virtual operation surface; and
a signal output step of outputting a prescribed signal when the motion is judged to be an operation.
CN201080035693.8A 2009-08-12 2010-08-12 Image recognition device, operation determination method, and program Expired - Fee Related CN102473041B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2009187449A JP4701424B2 (en) 2009-08-12 2009-08-12 Image recognition apparatus, operation determination method, and program
JP2009-187449 2009-08-12
PCT/JP2010/005058 WO2011018901A1 (en) 2009-08-12 2010-08-12 Image recognition device, operation determination method, and program

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN201510015361.8A Division CN104615242A (en) 2009-08-12 2010-08-12 Image recognition device, operation determination method, and program

Publications (2)

Publication Number Publication Date
CN102473041A true CN102473041A (en) 2012-05-23
CN102473041B CN102473041B (en) 2015-01-07

Family

ID=43586084

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201080035693.8A Expired - Fee Related CN102473041B (en) 2009-08-12 2010-08-12 Image recognition device, operation determination method, and program
CN201510015361.8A Pending CN104615242A (en) 2009-08-12 2010-08-12 Image recognition device, operation determination method, and program

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201510015361.8A Pending CN104615242A (en) 2009-08-12 2010-08-12 Image recognition device, operation determination method, and program

Country Status (7)

Country Link
US (2) US8890809B2 (en)
EP (1) EP2466423B1 (en)
JP (1) JP4701424B2 (en)
KR (1) KR101347232B1 (en)
CN (2) CN102473041B (en)
CA (2) CA2886208A1 (en)
WO (1) WO2011018901A1 (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103713823A (en) * 2013-12-30 2014-04-09 深圳泰山在线科技有限公司 Method and system for updating operation area position in real time
CN104238737A (en) * 2013-06-05 2014-12-24 佳能株式会社 Information processing apparatus capable of recognizing user operation and method for controlling the same
CN104750243A (en) * 2013-12-27 2015-07-01 日立麦克赛尔株式会社 Image projection device
CN107015644A (en) * 2017-03-22 2017-08-04 腾讯科技(深圳)有限公司 Virtual scene middle reaches target position adjustments method and device
CN107608515A (en) * 2012-11-21 2018-01-19 英飞凌科技股份有限公司 The dynamic of imaging power is saved
CN107861403A (en) * 2017-09-19 2018-03-30 珠海格力电器股份有限公司 Press key locking control method, device, storage medium and the electrical equipment of a kind of electrical equipment
CN113569635A (en) * 2021-06-22 2021-10-29 惠州越登智能科技有限公司 Gesture recognition method and system

Families Citing this family (76)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4701424B2 (en) * 2009-08-12 2011-06-15 島根県 Image recognition apparatus, operation determination method, and program
US8639020B1 (en) 2010-06-16 2014-01-28 Intel Corporation Method and system for modeling subjects from a depth map
GB2488784A (en) * 2011-03-07 2012-09-12 Sharp Kk A method for user interaction of the device in which a template is generated from an object
GB2488785A (en) * 2011-03-07 2012-09-12 Sharp Kk A method of user interaction with a device in which a cursor position is calculated using information from tracking part of the user (face) and an object
JP5864043B2 (en) * 2011-04-12 2016-02-17 シャープ株式会社 Display device, operation input method, operation input program, and recording medium
JP2012252386A (en) * 2011-05-31 2012-12-20 Ntt Docomo Inc Display device
US11048333B2 (en) 2011-06-23 2021-06-29 Intel Corporation System and method for close-range movement tracking
JP6074170B2 (en) 2011-06-23 2017-02-01 インテル・コーポレーション Short range motion tracking system and method
JP5921835B2 (en) * 2011-08-23 2016-05-24 日立マクセル株式会社 Input device
JP5581292B2 (en) * 2011-09-30 2014-08-27 楽天株式会社 SEARCH DEVICE, SEARCH METHOD, RECORDING MEDIUM, AND PROGRAM
EP2782328A4 (en) * 2011-12-16 2015-03-11 Olympus Imaging Corp Imaging device and imaging method, and storage medium for storing tracking program processable by computer
US9684379B2 (en) * 2011-12-23 2017-06-20 Intel Corporation Computing system utilizing coordinated two-hand command gestures
WO2013095678A1 (en) 2011-12-23 2013-06-27 Intel Corporation Mechanism to provide feedback regarding computing system command gestures
WO2013095677A1 (en) 2011-12-23 2013-06-27 Intel Corporation Computing system utilizing three-dimensional manipulation command gestures
WO2013095671A1 (en) 2011-12-23 2013-06-27 Intel Corporation Transition mechanism for computing system utilizing user sensing
JP2013132371A (en) * 2011-12-26 2013-07-08 Denso Corp Motion detection apparatus
JP2013134549A (en) * 2011-12-26 2013-07-08 Sharp Corp Data input device and data input method
EP2610714B1 (en) * 2012-01-02 2016-08-10 Alcatel Lucent Depth camera enabled pointing behavior
US9222767B2 (en) 2012-01-03 2015-12-29 Samsung Electronics Co., Ltd. Display apparatus and method for estimating depth
JP5586641B2 (en) * 2012-02-24 2014-09-10 東芝テック株式会社 Product reading apparatus and product reading program
US20130239041A1 (en) * 2012-03-06 2013-09-12 Sony Corporation Gesture control techniques for use with displayed virtual keyboards
US9477303B2 (en) 2012-04-09 2016-10-25 Intel Corporation System and method for combining three-dimensional tracking with a three-dimensional display for a user interface
KR101424562B1 (en) * 2012-06-11 2014-07-31 한국과학기술원 Space sensing device, method of operating the same, and system including the same
JP2014002502A (en) * 2012-06-18 2014-01-09 Dainippon Printing Co Ltd Stretched-out hand detector, stretched-out hand detecting method and program
JP5654526B2 (en) * 2012-06-19 2015-01-14 株式会社東芝 Information processing apparatus, calibration method, and program
JP2014029656A (en) * 2012-06-27 2014-02-13 Soka Univ Image processor and image processing method
JP5921981B2 (en) * 2012-07-25 2016-05-24 日立マクセル株式会社 Video display device and video display method
US20140123077A1 (en) * 2012-10-29 2014-05-01 Intel Corporation System and method for user interaction and control of electronic devices
US10186057B2 (en) 2012-11-22 2019-01-22 Sharp Kabushiki Kaisha Data input device, data input method, and non-transitory computer readable recording medium storing data input program
JP5950806B2 (en) * 2012-12-06 2016-07-13 三菱電機株式会社 Input device, information processing method, and information processing program
US20140340498A1 (en) * 2012-12-20 2014-11-20 Google Inc. Using distance between objects in touchless gestural interfaces
JP6167529B2 (en) * 2013-01-16 2017-07-26 株式会社リコー Image projection apparatus, image projection system, control method, and program
JP6029478B2 (en) * 2013-01-30 2016-11-24 三菱電機株式会社 Input device, information processing method, and information processing program
JP5950845B2 (en) * 2013-02-07 2016-07-13 三菱電機株式会社 Input device, information processing method, and information processing program
JP2018088259A (en) * 2013-03-05 2018-06-07 株式会社リコー Image projection device, system, image projection method, and program
US9519351B2 (en) * 2013-03-08 2016-12-13 Google Inc. Providing a gesture-based interface
JP6044426B2 (en) * 2013-04-02 2016-12-14 富士通株式会社 Information operation display system, display program, and display method
WO2015002420A1 (en) * 2013-07-02 2015-01-08 (주) 리얼밸류 Portable terminal control method, recording medium having saved thereon program for implementing same, application distribution server, and portable terminal
WO2015002421A1 (en) * 2013-07-02 2015-01-08 (주) 리얼밸류 Portable terminal control method, recording medium having saved thereon program for implementing same, application distribution server, and portable terminal
JP6248462B2 (en) * 2013-08-08 2017-12-20 富士ゼロックス株式会社 Information processing apparatus and program
KR102166330B1 (en) * 2013-08-23 2020-10-15 삼성메디슨 주식회사 Method and apparatus for providing user interface of medical diagnostic apparatus
JP6213193B2 (en) * 2013-11-29 2017-10-18 富士通株式会社 Operation determination method and operation determination apparatus
EP2916209B1 (en) * 2014-03-03 2019-11-20 Nokia Technologies Oy Input axis between an apparatus and a separate apparatus
US20150323999A1 (en) * 2014-05-12 2015-11-12 Shimane Prefectural Government Information input device and information input method
KR101601951B1 (en) * 2014-09-29 2016-03-09 주식회사 토비스 Curved Display for Performing Air Touch Input
KR20170101769A (en) 2014-12-26 2017-09-06 가부시키가이샤 니콘 Detection device and program
EP3239818A4 (en) 2014-12-26 2018-07-11 Nikon Corporation Control device, electronic instrument, control method, and program
CN106062683B (en) 2014-12-26 2021-01-08 株式会社尼康 Detection device, electronic apparatus, detection method, and program
US9984519B2 (en) 2015-04-10 2018-05-29 Google Llc Method and system for optical user recognition
CN104765459B (en) * 2015-04-23 2018-02-06 无锡天脉聚源传媒科技有限公司 Implementation method and device for pseudo operation
CN104866096B (en) * 2015-05-18 2018-01-05 中国科学院软件研究所 Method for command selection using upper-arm extension information
CN104978033A (en) * 2015-07-08 2015-10-14 北京百马科技有限公司 Human-computer interaction device
KR101685523B1 (en) * 2015-10-14 2016-12-14 세종대학교산학협력단 Nui/nux of virtual monitor concept using concentration indicator and user's physical features
KR101717375B1 (en) * 2015-10-21 2017-03-17 세종대학교산학협력단 Game interface using hand-mouse based on virtual monitor
US10216405B2 (en) * 2015-10-24 2019-02-26 Microsoft Technology Licensing, Llc Presenting control interface based on multi-input command
CN105404384A (en) * 2015-11-02 2016-03-16 深圳奥比中光科技有限公司 Gesture operation method, method for positioning screen cursor by gesture, and gesture system
US10610133B2 (en) 2015-11-05 2020-04-07 Google Llc Using active IR sensor to monitor sleep
JP6569496B2 (en) * 2015-11-26 2019-09-04 富士通株式会社 Input device, input method, and program
US10289206B2 (en) * 2015-12-18 2019-05-14 Intel Corporation Free-form drawing and health applications
JP6733731B2 (en) * 2016-06-28 2020-08-05 株式会社ニコン Control device, program and control method
JP6230666B2 (en) * 2016-06-30 2017-11-15 シャープ株式会社 Data input device, data input method, and data input program
US20180024623A1 (en) * 2016-07-22 2018-01-25 Google Inc. Detecting user range of motion for virtual reality user interfaces
WO2018083737A1 (en) * 2016-11-01 2018-05-11 マクセル株式会社 Display device and remote operation controller
JP6246310B1 (en) 2016-12-22 2017-12-13 株式会社コロプラ Method, program, and apparatus for providing virtual space
CN108345377A (en) * 2017-01-25 2018-07-31 武汉仁光科技有限公司 Kinect-based interaction method adaptive to user height
KR101821522B1 (en) * 2017-02-08 2018-03-08 윤일식 Apparatus and Method for controlling the Motion of an Elevator using a Monitor
KR102610690B1 (en) * 2017-02-24 2023-12-07 한국전자통신연구원 Apparatus for expressing user input using outline and trajectory representation and method for the same
KR101968547B1 (en) * 2017-07-17 2019-04-12 주식회사 브이터치 Method, system and non-transitory computer-readable recording medium for assisting object control
WO2019026713A1 (en) * 2017-08-04 2019-02-07 ソニー株式会社 Information processing device, information processing method, and program
JP7017675B2 (en) * 2018-02-15 2022-02-09 有限会社ワタナベエレクトロニクス Contactless input system, method and program
US20200012350A1 (en) * 2018-07-08 2020-01-09 Youspace, Inc. Systems and methods for refined gesture recognition
JP7058198B2 (en) * 2018-08-21 2022-04-21 グリー株式会社 Image display system, image display method and image display program
JP7447302B2 (en) * 2020-03-23 2024-03-11 華為技術有限公司 Method and system for hand gesture-based control of devices
CN111880657B (en) * 2020-07-30 2023-04-11 北京市商汤科技开发有限公司 Control method and device of virtual object, electronic equipment and storage medium
JP7041211B2 (en) * 2020-08-03 2022-03-23 パラマウントベッド株式会社 Image display control device, image display system and program
JP2022163813A (en) * 2021-04-15 2022-10-27 キヤノン株式会社 Wearable information terminal, control method for the same, and program

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0612177A (en) * 1992-06-29 1994-01-21 Canon Inc Information inputting method and device therefor
JP2004013314A (en) * 2002-06-04 2004-01-15 Fuji Xerox Co Ltd Position measuring input support device
JP2006209359A (en) * 2005-01-26 2006-08-10 Takenaka Komuten Co Ltd Apparatus, method and program for recognizing indicating action
US20090183125A1 (en) * 2008-01-14 2009-07-16 Prime Sense Ltd. Three-dimensional user interface

Family Cites Families (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3749369B2 (en) * 1997-03-21 2006-02-22 株式会社竹中工務店 Hand pointing device
JP3795647B2 (en) * 1997-10-29 2006-07-12 株式会社竹中工務店 Hand pointing device
US6064354A (en) * 1998-07-01 2000-05-16 Deluca; Michael Joseph Stereoscopic user interface method and apparatus
JP2001236179A (en) * 2000-02-22 2001-08-31 Seiko Epson Corp System and method for detecting indication position, presentation system and information storage medium
US7227526B2 (en) * 2000-07-24 2007-06-05 Gesturetek, Inc. Video-based image control system
US6911995B2 (en) * 2001-08-17 2005-06-28 Mitsubishi Electric Research Labs, Inc. Computer vision depth segmentation using virtual surface
JP4974319B2 (en) * 2001-09-10 2012-07-11 株式会社バンダイナムコゲームス Image generation system, program, and information storage medium
JP4286556B2 (en) * 2003-02-24 2009-07-01 株式会社東芝 Image display device
JP2004078977A (en) 2003-09-19 2004-03-11 Matsushita Electric Ind Co Ltd Interface device
HU0401034D0 (en) * 2004-05-24 2004-08-30 Ratai Daniel System of three dimension induting computer technology, and method of executing spatial processes
EP1769328A2 (en) * 2004-06-29 2007-04-04 Koninklijke Philips Electronics N.V. Zooming in 3-d touch interaction
CN101308442B (en) * 2004-10-12 2012-04-04 日本电信电话株式会社 3d pointing method and 3d pointing device
US8614676B2 (en) * 2007-04-24 2013-12-24 Kuo-Ching Chiang User motion detection mouse for electronic device
US20060267927A1 (en) * 2005-05-27 2006-11-30 Crenshaw James E User interface controller method and apparatus for a handheld electronic device
ITUD20050152A1 (en) * 2005-09-23 2007-03-24 Neuricam Spa ELECTRO-OPTICAL DEVICE FOR THE COUNTING OF PEOPLE, OR OTHERWISE, BASED ON STEREOSCOPIC VISION, AND ITS PROCEDURE
US8217895B2 (en) * 2006-04-28 2012-07-10 Mtekvision Co., Ltd. Non-contact selection device
CN200947919Y (en) * 2006-08-23 2007-09-19 陈朝龙 Supporting structure assisting the operation of mouse
JP4481280B2 (en) * 2006-08-30 2010-06-16 富士フイルム株式会社 Image processing apparatus and image processing method
US8354997B2 (en) * 2006-10-31 2013-01-15 Navisense Touchless user interface for a mobile device
KR100851977B1 (en) * 2006-11-20 2008-08-12 삼성전자주식회사 Controlling Method and apparatus for User Interface of electronic machine using Virtual plane.
KR100827243B1 (en) * 2006-12-18 2008-05-07 삼성전자주식회사 Information input device and method for inputting information in 3d space
CN101064076A (en) * 2007-04-25 2007-10-31 上海大学 Distant view orienting enquiring display apparatus and method
WO2008149860A1 (en) * 2007-06-04 2008-12-11 Shimane Prefectural Government Information inputting device, and information outputting device and method
JP4318056B1 (en) * 2008-06-03 2009-08-19 島根県 Image recognition apparatus and operation determination method
US8325181B1 (en) * 2009-04-01 2012-12-04 Perceptive Pixel Inc. Constraining motion in 2D and 3D manipulation
US20100315413A1 (en) * 2009-06-16 2010-12-16 Microsoft Corporation Surface Computer User Interaction
JP4701424B2 (en) * 2009-08-12 2011-06-15 島根県 Image recognition apparatus, operation determination method, and program
US8261211B2 (en) * 2009-10-01 2012-09-04 Microsoft Corporation Monitoring pointer trajectory and modifying display interface
US20120056989A1 (en) * 2010-09-06 2012-03-08 Shimane Prefectural Government Image recognition apparatus, operation determining method and program

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107608515A (en) * 2012-11-21 2018-01-19 英飞凌科技股份有限公司 The dynamic of imaging power is saved
CN107608515B (en) * 2012-11-21 2021-01-05 英飞凌科技股份有限公司 Dynamic saving of imaging power
CN104238737A (en) * 2013-06-05 2014-12-24 佳能株式会社 Information processing apparatus capable of recognizing user operation and method for controlling the same
CN104238737B (en) * 2013-06-05 2017-04-26 佳能株式会社 Information processing apparatus capable of recognizing user operation and method for controlling the same
CN104750243A (en) * 2013-12-27 2015-07-01 日立麦克赛尔株式会社 Image projection device
CN104750243B (en) * 2013-12-27 2018-02-23 日立麦克赛尔株式会社 Image projection device
CN103713823A (en) * 2013-12-30 2014-04-09 深圳泰山在线科技有限公司 Method and system for updating operation area position in real time
CN107015644A (en) * 2017-03-22 2017-08-04 腾讯科技(深圳)有限公司 Virtual scene middle reaches target position adjustments method and device
CN107861403A (en) * 2017-09-19 2018-03-30 珠海格力电器股份有限公司 Press key locking control method, device, storage medium and the electrical equipment of a kind of electrical equipment
CN113569635A (en) * 2021-06-22 2021-10-29 惠州越登智能科技有限公司 Gesture recognition method and system

Also Published As

Publication number Publication date
EP2466423B1 (en) 2018-07-04
CA2768893C (en) 2015-11-17
KR20120040211A (en) 2012-04-26
JP4701424B2 (en) 2011-06-15
WO2011018901A1 (en) 2011-02-17
CN102473041B (en) 2015-01-07
CN104615242A (en) 2015-05-13
CA2768893A1 (en) 2011-02-17
JP2011039844A (en) 2011-02-24
EP2466423A1 (en) 2012-06-20
KR101347232B1 (en) 2014-01-03
US8890809B2 (en) 2014-11-18
CA2886208A1 (en) 2011-02-17
EP2466423A4 (en) 2015-03-11
US20150130715A1 (en) 2015-05-14
US20120119988A1 (en) 2012-05-17
US9535512B2 (en) 2017-01-03

Similar Documents

Publication Publication Date Title
CN102473041A (en) Image recognition device, operation determination method, and program
CN102057347B (en) Image recognizing device, operation judging method, and program
CN103154858B (en) Input device and method and program
US9651782B2 (en) Wearable tracking device
CN103124945B (en) Pattern recognition device and operation judges method and program
US20120056989A1 (en) Image recognition apparatus, operation determining method and program
WO2013035758A1 (en) Information display system, information display method, and storage medium
US9658685B2 (en) Three-dimensional input device and input system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee (granted publication date: 20150107; termination date: 20200812)