CN103984928A - Finger gesture recognition method based on field depth image - Google Patents

Finger gesture recognition method based on field depth image

Info

Publication number
CN103984928A
CN103984928A (application CN201410213052.7A; granted as CN103984928B)
Authority
CN
China
Prior art keywords
hand
point
finger
profile
linked list
Prior art date
Legal status
Granted
Application number
CN201410213052.7A
Other languages
Chinese (zh)
Other versions
CN103984928B (en)
Inventor
史卓
玉珂
周长劭
李映辉
程源泉
Current Assignee
Guilin University of Electronic Technology
Original Assignee
Guilin University of Electronic Technology
Priority date
Filing date
Publication date
Application filed by Guilin University of Electronic Technology
Priority to CN201410213052.7A
Publication of CN103984928A
Application granted
Publication of CN103984928B
Legal status: Active
Anticipated expiration


Abstract

The invention discloses a finger gesture recognition method based on depth images. The method comprises the steps of: starting a depth camera and acquiring depth video data; inferring the operator's hand length and determining the palm-center position and hand-length data; segmenting and cropping the spherical hand-region image and preprocessing it; and recognizing fingertips and then recognizing the gesture from their geometric relationships. Exploiting the characteristics of depth images, the method rapidly crops out the hand region and analyzes only that target region, which lowers computational complexity and adapts well to dynamic scene changes. A maximum-concave-point contour scanning algorithm is used for fingertip recognition, improving its robustness; once fingertips are accurately located, individual fingers are identified from their direction vectors and geometric relationships, enabling the recognition of a variety of gestures. The method is simple, flexible and easy to implement.

Description

Finger gesture recognition method based on depth image
Technical field
The invention belongs to the field of human-computer interaction, and specifically relates to a finger gesture recognition method based on depth images.
Background technology
Current gesture recognition methods, both domestic and international, fall roughly into two classes: those based on wearable devices and those based on traditional vision. Wearable-device approaches obtain finger-motion features from sensors such as data gloves and position trackers, and feed the joint data into a computer where a neural network analyzes them, recognizing the gesture for human-computer interaction. Their main advantage is that they measure finger postures and gestures directly, but they are comparatively expensive and therefore unsuitable for large-scale adoption. Traditional vision-based methods capture gesture video or images with an ordinary camera and then process them for recognition. Although they offer a good interaction experience, to improve robustness and reliably extract features such as hand position, hand shape and finger orientation, the user must wear colored gloves and clothing meeting particular requirements against a background of uniform color; such methods are therefore easily affected by environmental factors such as background, lighting and camera position.
Chinese invention patent application CN201210331153 discloses "a human-computer interaction method based on gesture recognition". The method processes a gesture image/video stream, segments the gesture, and builds gesture templates, using HSV skin-color detection for segmentation and modeling; it then recognizes the gesture and finally drives a simulated mouse pointer for interaction. Although the method can recognize the two simple gestures of an open palm and a clenched fist, gesture segmentation in complex scenes is time-consuming, and the real-time performance is inadequate for human-computer interaction.
Chinese invention patent CN200910198196 discloses "a fingertip localization method for pointing gestures", which automatically determines the fingertip position of a pointing gesture. It extracts the hand region with background subtraction, again relying on skin-color features. Its advantages are fast, accurate detection and ease of implementation; its shortcoming is high sensitivity to scene complexity and to ambient interference and noise.
Chinese invention patent application CN201010279511 discloses "a method for locating the hand and pointing position and determining the gesture in a human-computer interaction system". The method first acquires an image/video stream, segments the foreground by image differencing, binarizes the image to obtain a minimum-convex-hull vertex set, constructs a region around each convex-hull vertex pixel that must contain a potential hand region, and then applies pattern recognition to pick the target from the candidate regions. At present it recognizes only two gestures, closed and open. Its shortcomings are that computing the minimum-convex-hull vertex set over the whole image is costly, the hand cannot be located in one pass and errors occur easily, and a certain camera resolution is required.
Summary of the invention
The technical problem addressed by the invention is that existing gesture and finger recognition methods for human-computer interaction either tolerate outside illumination and skin-color variation poorly or are overly complicated, demand simple backgrounds, or are so computationally heavy that performance suffers. The invention provides a finger gesture recognition method based on depth images that exploits the characteristics of such images, adopting corresponding algorithms to rapidly locate the hand in the image/video stream, recognize fingertips, and finally recognize the gesture, thereby improving the flexibility and simplicity of human-computer interaction.
The idea of the invention is as follows: a depth camera captures depth images; a circular region of a particular radius, centered on the hand skeleton joint, is segmented out, and this region is the target to be analyzed. The target region is then clustered to obtain a hand mask; convex-hull detection and a contour-tracing algorithm yield the hand contour; an improved concave-convex angle algorithm identifies the fingertips; and individual fingers are identified from their geometric positions. Finally, the gesture is recognized precisely from the finger count, the finger direction vectors, the geometric positions, and a set of thresholds.
For addressing the above problem, the present invention is achieved by the following technical solutions:
A finger gesture recognition methods based on depth image, comprises the steps:
(1) Opening the depth camera and acquiring depth video data:
(1.1) the depth camera directly captures a video stream of depth images of the background and of the operator's whole body;
(1.2) the voxel information of every depth frame in the captured video stream is spatially transformed into point-cloud information in real space, from which the distance of each pixel from the depth camera and the operator's skeleton-joint information are obtained.
(2) Inferring the person's hand length and determining the palm-center position and hand-length data:
(2.1) the acquired point-cloud information is processed, and the operator's height is calculated according to formula (1):

Hr = (2 × d × tan θ × Hp) / HB    (1)

where Hr is the height of the operator to be measured, HB is the background pixel height, Hp is the pixel height of the operator in the captured image, d is the distance of the operator from the depth camera, and θ is the camera's vertical view angle;
(2.2) the operator's hand length HL is obtained from a pre-computed statistical correspondence between human height and hand length.
(3) Segmenting and cropping the spherical hand-region image and preprocessing it:
(3.1) a depth-distance filter removes all point data farther from the palm center than half the hand length, rapidly obtaining the hand data; the hand data thus lie within the sphere of radius HL/2 centered on the palm:

HandPoint = { p : |p(x, y, z) − p(x₀, y₀, z₀)| ≤ HL/2 }

where p(x₀, y₀, z₀) is the palm-center point, whose position coordinate is that of the hand skeleton joint, HL is the hand length, and HandPoint is the resulting point set of the hand target region;
(3.2) the cropped hand data are clustered by the K-means algorithm;
(3.3) by setting a minimum cluster size, non-hand (noise) pixel clusters are filtered out, yielding the hand mask.
(4) Recognizing fingertips and recognizing the gesture from their geometric relationships:
(4.1) applying the Moore-neighborhood contour-tracing algorithm to the obtained hand mask to detect its outer contour and obtain the linked list of hand contour points;
(4.2) applying the Graham scan to the outer contour of the hand mask to detect the convex hull of the hand contour and obtain the linked list of convex points on it;
(4.3) applying the maximum-concave-point contour scanning algorithm to the outer contour and its convex hull to detect the maximum concave point between every pair of convex points and obtain the linked list of concave-convex points on the hand contour;
(4.4) applying the concave-convex angle recognition algorithm to the obtained linked list of concave-convex points to obtain the linked list of fingertip points;
(4.5) after each finger has been identified, recognizing the gesture: first the number of fingers is determined, then the names of the fingers, then the direction vector of each finger and the angles between them; these three conditions form the three layers of a decision tree that realizes the gesture recognition.
In step (3.2), the K value of the K-means clustering algorithm is 2.
In step (4.3), the maximum-concave-point contour scanning algorithm proceeds as follows:
(4.3.1) copy the linked list of convex points on the hand contour as the initial concave-convex linked list;
(4.3.2) for each pair of adjacent convex points in the convex-hull linked list, apply the point-to-line distance formula to every contour point lying between them, and find the concave point at maximum distance from the straight line joining the two convex points;
(4.3.3) insert that maximum-distance concave point into the concave-convex linked list between the two convex points;
(4.3.4) return to step (4.3.2) until every point in the hand contour linked list has been examined;
(4.3.5) the maximum-distance points obtained by this iteration are the maximum concave points, and an ordered concave-convex linked list of the hand contour is generated.
In step (4.4), the concave-convex angle recognition algorithm proceeds as follows:
(4.4.1) traverse the concave-convex linked list of the hand contour in order, find a convex point P0, and choose its adjacent concave points P1 and P2 in the two directions before and after it;
(4.4.2) form the two vectors from P0 to P1 and from P0 to P2 and compute the angle at P0; if the angle is smaller than the set threshold, P0 is recognized as a fingertip and stored in the fingertip linked list;
(4.4.3) if the concave-convex linked list has not been fully examined, repeat step (4.4.1) for the next candidate convex point; otherwise finish;
(4.4.4) compute in turn the distances between every pair of adjacent and non-adjacent fingertip points in the fingertip linked list; the fingertip common to the maximum-distance adjacent pair and the maximum-distance non-adjacent pair is identified as the thumb; the fingertip adjacent to the thumb at maximum distance is the index finger; the fingertip non-adjacent to the thumb at maximum distance is the little finger; the fingertip nearest the index finger is the middle finger; and the remaining fingertip is the ring finger.
Compared with the prior art, the invention uses the features of the depth image to crop out the hand region rapidly and analyzes only the target region; it is simple to operate, flexible and easy to implement, avoids segmenting the target with traditional background subtraction, greatly reduces computational complexity, and adapts well to dynamic scene changes. Fingertip recognition uses the maximum-concave-point contour scanning algorithm, improving on the traditional three-point-alignment detection and overcoming the misidentification of true fingertips that occurs, at a given camera resolution, when the hand is too near or too far; this improves the robustness of fingertip recognition. After the fingertips are accurately identified, each finger is recognized from the finger direction vectors and their geometric relationships, enabling the recognition of a variety of gestures. The method of the invention is simple, flexible and easy to implement.
Brief description of the drawings
Fig. 1 is the overall implementation framework of the invention.
Fig. 2 is the framework of the height-based hand-image cropping system.
Fig. 3 is a gesture cropping schematic of one embodiment of the invention.
Fig. 4 is a hand-mask schematic of one embodiment of the invention.
Fig. 5 is a schematic of the angle recognition algorithm of one embodiment of the invention.
Fig. 6 is a fingertip-detection schematic of one embodiment of the invention.
Fig. 7 is a finger-identification schematic of one embodiment of the invention.
Embodiment
A finger gesture recognition method based on depth images; the overall implementation block diagram of the program is shown in Fig. 1. Fig. 2 is the block diagram of height-based hand-image cropping, and the hand cropping result is shown in Fig. 3. Obtaining the cropping result of Fig. 3 comprises the following steps:
(1) Opening the depth camera and acquiring depth video data.
(1.1) The depth camera directly captures a video stream of depth images of the background and of the operator's whole body.
(1.2) The voxel information of every depth frame in the captured video stream is spatially transformed into point-cloud information in real space, from which the distance of each pixel from the depth camera and the operator's skeleton-joint information are obtained. The skeleton joints are provided by the Microsoft Kinect motion-sensing device, which defines 20 joints representing a skeleton (in the standing state); during programming, the position coordinates of the skeleton joints and related information are obtained simply by calling the API functions Kinect provides.
The spatial transformation of each frame's voxel information into real-space point-cloud information is performed automatically by the Microsoft Kinect; calling its API functions yields the coordinate information of the reference points.
(2) the pushing deduction of personage, determines the process of palm of the hand position and pushing data.Here comprise two parts: personage's height is measured and hand data acquisition, and wherein hand data comprise palm of the hand position, pushing data etc.
(2.1) obtained some cloud information is processed, and according to formula, is 1. calculated manipulator's height,
Wherein
Hr = 2 × d × tan θ × Hp HB
In formula, Hr is manipulator's to be measured height, HB is background pixel height, Hp is the pixels tall of manipulator to be measured in taken the photograph image, d is that manipulator is from the distance of depth of field camera, it obtains from the video streaming image obtaining, and obtain manner is directly to call Kinect body sense device A PI function, and θ is the vertical angle in depth of field camera horizontal direction.In the present embodiment, the value of θ is 21.5 °, and the value of HB is 240.
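As a minimal sketch of formula (1), using the embodiment's stated values (θ = 21.5°, HB = 240) as defaults and assuming d is given in metric units:

```python
import math

def estimate_height(d, hp, hb=240, theta_deg=21.5):
    """Formula (1): Hr = 2 * d * tan(theta) * Hp / HB.

    d         -- operator's distance from the depth camera
    hp        -- operator's pixel height in the captured image
    hb        -- background pixel height (240 in this embodiment)
    theta_deg -- camera's vertical view angle (21.5 degrees in this embodiment)
    """
    return 2.0 * d * math.tan(math.radians(theta_deg)) * hp / hb
```

The estimate is linear in both d and Hp, so the constants HB and θ must be calibrated once per camera setup.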
(2.2), according to the Human Height of adding up in advance and pushing corresponding relation, obtain manipulator's pushing HL.Above-mentioned Human Height and pushing corresponding relation carry out multiple linear regression analysis by statistic software for calculation according to many data and draw.
(3) to hand spheric region image partition, cutting, and carry out pretreated process.Here also comprise two parts: by respective algorithms, hand spheric region carried out to image is cut apart and use related algorithm to process to the image after cutting apart.
Take palm of the hand coordinate as initial point, pushingly for diameter, hand spheric region is cut apart, be equivalent to carrying out a little a filtration based on apart from the degree of depth, remove and be allly greater than pushing cloud data with palm of the hand distance.This actual effect of bringing splits hand exactly from being with noisy background, improves target in complex environment fast and effectively and extracts accuracy.Then by related algorithms such as clusters, image is processed, carried out the detection of palm profile.
(3.1) A depth-distance filter removes all point data farther from the palm center than half the hand length, rapidly obtaining the hand data; the hand data thus lie within the sphere of radius HL/2 centered on the palm:

HandPoint = { p : |p(x, y, z) − p(x₀, y₀, z₀)| ≤ HL/2 }

where p(x₀, y₀, z₀) is the palm-center point, whose position coordinate is that of the hand skeleton joint, HL is the hand length, and HandPoint is the resulting point set of the hand target region.
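The sphere crop of step (3.1) can be sketched in a few lines; this is an illustrative NumPy version, not the patent's implementation:

```python
import numpy as np

def crop_hand(points, palm, hand_length):
    """Depth-distance filter of step (3.1): keep only points whose Euclidean
    distance from the palm-centre point is at most HL / 2.

    points      -- (N, 3) array of x, y, z point-cloud data
    palm        -- (3,) coordinate of the palm-centre skeleton joint
    hand_length -- hand length HL inferred from the operator's height
    """
    dist = np.linalg.norm(points - np.asarray(palm), axis=1)
    return points[dist <= hand_length / 2.0]
```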
(3.2) The cropped hand data are clustered by the K-means algorithm.
The K value of the K-means algorithm is the number of classes specified by the developer; in this embodiment K is fixed at 2.
(3.3) By setting a minimum cluster size, non-hand (noise) pixel clusters are filtered out, yielding the hand mask, a binary image of 0s and 1s, as shown in Fig. 4.
In this embodiment, the minimum pixel-count threshold is set to 50 pixels.
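Steps (3.2) and (3.3) can be sketched as follows — a plain K-means followed by a minimum-size filter, using the embodiment's values K = 2 and a 50-point threshold (the iteration count and seeding are illustrative assumptions):

```python
import numpy as np

def kmeans_labels(points, k=2, iters=20, seed=0):
    """Plain K-means over the cropped hand points (K is fixed at 2 in this
    embodiment). Returns a cluster label per point."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)].copy()
    for _ in range(iters):
        d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):          # avoid emptying a centre
                centers[j] = points[labels == j].mean(axis=0)
    return labels

def drop_noise_clusters(points, labels, min_points=50):
    """Step (3.3): discard clusters smaller than the minimum size
    (50 pixels in this embodiment), leaving the hand-mask points."""
    keep = [j for j in np.unique(labels) if np.sum(labels == j) >= min_points]
    return points[np.isin(labels, keep)]
```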
(4) finger tip is identified, and according to its geometric relationship, carries out the process of gesture identification.According to related algorithm, finger tip is identified, by judgement, pointed the geometric relationships such as quantity, direction, angle and carry out gesture identification.
The finger tip detection method of employing based on contour curvature, and in conjunction with the characteristic of depth image, sags and crests angle recognizer has been proposed, this algorithm has overcome the deficiency that 3 conventional alignment methods detect finger tip, as the operand that lacks relative unchangeability, the distance from camera is had requirement and increases program.Then utilize spatial relation identification finger.Finally, by three hierarchical classifier methods, three layers of decision tree carry out analyzing and processing to gesture, thus identification gesture.
(4.1) The Moore-neighborhood contour-tracing algorithm is applied to the obtained hand mask to detect its outer contour and obtain the linked list of hand contour points. The Moore-neighborhood algorithm is a classical contour-detection algorithm, used here simply to determine the hand contour.
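A compact sketch of Moore-neighbourhood tracing over a binary mask (with a simple return-to-start stopping rule rather than Jacob's criterion):

```python
import numpy as np

# Clockwise Moore neighbourhood offsets (dy, dx), starting at the west neighbour.
MOORE = [(0, -1), (-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1)]

def trace_contour(mask):
    """Return the outer contour of a binary mask as an ordered list of
    (row, col) pixels, traced clockwise from the top-most left-most pixel."""
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return []
    start = (int(ys[0]), int(xs[0]))        # row-major scan: top-most, left-most
    contour = [start]
    prev = (start[0], start[1] - 1)         # pretend we entered from the west
    cur = start
    while True:
        i = MOORE.index((prev[0] - cur[0], prev[1] - cur[1]))
        found = None
        for k in range(1, 9):               # scan clockwise from the backtrack
            dy, dx = MOORE[(i + k) % 8]
            ny, nx = cur[0] + dy, cur[1] + dx
            if 0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1] and mask[ny, nx]:
                found = (ny, nx)
                break
            prev = (ny, nx)                 # last background pixel seen
        if found is None or found == start: # isolated pixel, or loop closed
            break
        contour.append(found)
        cur = found
    return contour
```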
(4.2) appearance profile of obtained hand mask is adopted to Graham scanning algorithm, detect the convex closure collection of hand profile, and obtain the salient point chained list collection on hand profile.Wherein Graham scanning algorithm is also classical algorithm, is used in the present invention detecting hand convex closure collection.
(4.3) The maximum-concave-point contour scanning algorithm is applied to the outer contour of the hand mask and the convex hull of the hand contour to detect the maximum concave point between every pair of convex points and obtain the linked list of concave-convex points on the hand contour.
The maximum-concave-point contour scanning algorithm proceeds as follows:
(4.3.1) copy the linked list of convex points on the hand contour as the initial concave-convex linked list.
(4.3.2) For each pair of adjacent convex points in the convex-hull linked list, apply the point-to-line distance formula to every contour point lying between them, and find the concave point at maximum distance from the straight line joining the two convex points.
(4.3.3) Insert that maximum-distance concave point into the concave-convex linked list between the two convex points.
(4.3.4) Return to step (4.3.2) until every point in the hand contour linked list has been examined.
(4.3.5) The maximum-distance points obtained by this iteration are the maximum concave points, and an ordered concave-convex linked list of the hand contour is generated.
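Steps (4.3.1) through (4.3.5) can be sketched as follows, assuming the hull vertices are given as indices into the ordered contour (a representation choice of this sketch, not of the patent):

```python
import math

def chord_distance(p, a, b):
    """Perpendicular distance from contour point p to the chord through
    convex points a and b (the point-to-line distance of step (4.3.2))."""
    num = abs((b[0] - a[0]) * (a[1] - p[1]) - (a[0] - p[0]) * (b[1] - a[1]))
    den = math.hypot(b[0] - a[0], b[1] - a[1])
    return num / den if den else math.hypot(p[0] - a[0], p[1] - a[1])

def max_concave_points(contour, hull_idx):
    """For every pair of consecutive convex-hull vertices, return the contour
    point lying between them at maximum distance from their chord."""
    n, concaves = len(contour), []
    for i in range(len(hull_idx)):
        a, b = hull_idx[i], hull_idx[(i + 1) % len(hull_idx)]
        between = []
        j = (a + 1) % n
        while j != b:                        # contour indices strictly between a and b
            between.append(j)
            j = (j + 1) % n
        if between:
            best = max(between,
                       key=lambda k: chord_distance(contour[k], contour[a], contour[b]))
            concaves.append(contour[best])
    return concaves
```

On a hand contour the returned points are the valleys between fingers, which the next step pairs with the convex fingertip candidates.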
(4.4) The concave-convex angle recognition algorithm is applied to the obtained linked list of concave-convex points on the hand contour to obtain the linked list of fingertip points.
The concave-convex angle recognition algorithm proceeds as follows:
(4.4.1) traverse the concave-convex linked list of the hand contour in order, find a convex point P0, and choose its adjacent concave points P1 and P2 in the two directions before and after it.
(4.4.2) Form the two vectors from P0 to P1 and from P0 to P2 and compute the angle at P0; if the angle is smaller than the set threshold, P0 is recognized as a fingertip and stored in the fingertip linked list.
(4.4.3) If the concave-convex linked list has not been fully examined, repeat step (4.4.1) for the next candidate convex point; otherwise finish.
(4.4.4) Compute in turn the distances between every pair of adjacent and non-adjacent fingertip points in the fingertip linked list; the fingertip common to the maximum-distance adjacent pair and the maximum-distance non-adjacent pair is identified as the thumb; the fingertip adjacent to the thumb at maximum distance is the index finger; the fingertip non-adjacent to the thumb at maximum distance is the little finger; the fingertip nearest the index finger is the middle finger; and the remaining fingertip is the ring finger.
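The geometric naming rules of step (4.4.4) can be sketched as follows, assuming all five fingertips of an open hand were detected and that the fingertip linked list is ordered but not circular:

```python
import math
from itertools import combinations

def name_fingers(tips):
    """Name the fingers of an open hand. tips is the ordered fingertip list
    (5 points along the contour); returns a finger-name -> index mapping."""
    n = len(tips)
    d = lambda i, j: math.dist(tips[i], tips[j])
    # Maximum-distance adjacent pair and maximum-distance non-adjacent pair.
    adj = max(((i, i + 1) for i in range(n - 1)), key=lambda p: d(*p))
    nonadj = max((p for p in combinations(range(n), 2) if p[1] - p[0] >= 2),
                 key=lambda p: d(*p))
    thumb = (set(adj) & set(nonadj)).pop()   # common to both maxima
    neighbours = [i for i in (thumb - 1, thumb + 1) if 0 <= i < n]
    index = max(neighbours, key=lambda i: d(thumb, i))
    little = (set(nonadj) - {thumb}).pop()
    rest = [i for i in range(n) if i not in (thumb, index, little)]
    middle = min(rest, key=lambda i: d(index, i))
    ring = next(i for i in rest if i != middle)
    return {"thumb": thumb, "index": index, "middle": middle,
            "ring": ring, "little": little}
```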
Fig. 5 and Fig. 6 show the operation of the invention's contour-curvature fingertip detection combined with the concave-convex angle recognition algorithm proposed for depth images. Traditional fingertip detection nearly always adopts three-point alignment, whose major defect is that, because the threshold value is fixed, a hand too near or too far from the camera causes misjudgment, and the collinearity test adds sequential computation. The concave-convex angle recognition algorithm proposed by the invention solves this problem effectively. In this embodiment the concave-convex angle threshold is set to 40 degrees.
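The angle test of step (4.4.2) with the embodiment's 40-degree threshold can be sketched as:

```python
import math

def is_fingertip(p0, p1, p2, threshold_deg=40.0):
    """P0 is a candidate convex point, P1 and P2 its adjacent concave points.
    P0 is accepted as a fingertip when the angle P1-P0-P2 falls below the
    threshold (40 degrees in this embodiment)."""
    v1 = (p1[0] - p0[0], p1[1] - p0[1])
    v2 = (p2[0] - p0[0], p2[1] - p0[1])
    n1, n2 = math.hypot(*v1), math.hypot(*v2)
    if n1 == 0.0 or n2 == 0.0:
        return False
    cos_a = (v1[0] * v2[0] + v1[1] * v2[1]) / (n1 * n2)
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))
    return angle < threshold_deg
```

Unlike a fixed three-point collinearity threshold, the angle at P0 is scale-invariant, which is why the test stays stable as the hand moves nearer to or farther from the camera.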
(4.5) identify after each finger, start to identify gesture.For the identification of a gesture, first identify the number of finger, then obtain the title of finger, and the direction vector of each finger and the angle between them, these three conditions are as three layers of decision tree, thereby realize the identification of gesture.Above-mentioned decision tree is by sample is carried out to inductive learning, generates corresponding decision tree or decision rule, a kind of mathematical method of then according to decision tree or rule, new data being classified, and in various sorting algorithms, decision tree is the most a kind of.Three layers of decision tree is exactly using above-mentioned three conditions classification foundation of one deck decision node in tree separately, thereby reaches classification object.
Fig. 7 shows the four stages of finger identification: stage 1 identifies the thumb and index finger as the farthest pair among all adjacent fingertip points; stage 2 identifies the little finger as the fingertip farthest from the thumb; stage 3 judges the fingertip nearest the index finger to be the middle finger; stage 4 takes the remaining fingertip as the ring finger.
Hand detection and finger identification in the invention are carried out each time depth-image data arrive. If the same object still exists in the next depth frame and its contour is only slightly deformed relative to the previous frame, all object attributes continue to reuse the feature points computed from the old depth frame, which reduces the program's workload and raises efficiency.
Identify after each finger, start to identify gesture.For the identification of a gesture, first identify the number of finger, then obtain the title of finger, and the direction vector of each finger and the angle between them, these three conditions are as three layers of decision tree, thereby realize the identification of gesture.For example gesture " numeral 3 " and gesture " I love you ", first identify three fingers, then correspondence is pointed name, gesture " has been used thumb, forefinger and little finger of toe in " I love you "; this can easily judge different from " numeral 3 " gesture; same, " numeral 3 " used forefinger, middle finger and the third finger.For example gesture " numeral 2 " and Chinese gesture again " seven ", the hand exponential sum finger name that two gestures are used is all identical, can distinguish by the vector angle of two gestures, for " numeral 2 ", the vector angle of its two finger orientations must be an acute angle, and just can allow computing machine identify this while being less than the threshold value that we set is " numeral 2 ", the vector angle on its two finger orientations of same Chinese gesture " seven " is " seven " while being greater than the threshold value of our setting.
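The three-layer decision for the examples just given can be sketched as follows. The gesture labels and the 60-degree angle threshold are illustrative assumptions of this sketch, not values fixed by the patent:

```python
import math

def angle_deg(v1, v2):
    """Angle between two finger direction vectors, in degrees."""
    n = math.hypot(*v1) * math.hypot(*v2)
    cos_a = (v1[0] * v2[0] + v1[1] * v2[1]) / n
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))

def classify_gesture(fingers, directions, two_finger_threshold=60.0):
    """Three-layer decision: layer 1 is the finger count, layer 2 the finger
    names, layer 3 the inter-finger angle.

    fingers    -- set of extended finger names
    directions -- mapping from finger name to its direction vector
    """
    if fingers == {"thumb", "index", "little"}:
        return "I love you"
    if fingers == {"index", "middle", "ring"}:
        return "digit 3"
    if fingers == {"index", "middle"}:
        ang = angle_deg(directions["index"], directions["middle"])
        return "digit 2" if ang < two_finger_threshold else "seven (Chinese)"
    return None                      # gesture outside this sketch's vocabulary
```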

Claims (4)

1. A finger gesture recognition method based on depth images, characterized by comprising the following steps:
(1) opening the depth camera and acquiring depth video data:
(1.1) the depth camera directly captures a video stream of depth images of the background and of the operator's whole body;
(1.2) the voxel information of every depth frame in the captured video stream is spatially transformed into point-cloud information in real space, from which the distance of each pixel from the depth camera and the operator's skeleton-joint information are obtained;
(2) inferring the person's hand length and determining the palm-center position and hand-length data:
(2.1) the acquired point-cloud information is processed, and the operator's height is calculated according to formula (1):

Hr = (2 × d × tan θ × Hp) / HB    (1)

where Hr is the height of the operator to be measured, HB is the background pixel height, Hp is the pixel height of the operator in the captured image, d is the distance of the operator from the depth camera, and θ is the camera's vertical view angle;
(2.2) the operator's hand length HL is obtained from a pre-computed statistical correspondence between human height and hand length;
(3) segmenting and cropping the spherical hand-region image and preprocessing it:
(3.1) a depth-distance filter removes all point data farther from the palm center than half the hand length, rapidly obtaining the hand data, which lie within the sphere of radius HL/2 centered on the palm;
(3.2) the cropped hand data are clustered by the K-means algorithm;
(3.3) by setting a minimum cluster size, non-hand (noise) pixel clusters are filtered out, yielding the hand mask;
(4) finger tip is identified, and according to its geometric relationship, carries out the process of gesture identification,
(4.1) obtained hand mask is adopted to Moore neighborhood profile track algorithm, detect the appearance profile of hand mask, and obtain hand point chained list collection;
(4.2) applying the Graham scan algorithm to the detected outer contour of the hand mask to compute the convex hull of the hand contour and obtain the linked list of convex points on the hand contour;
(4.3) applying the contour maximum-concave-point scanning algorithm to the outer contour of the hand mask and the convex hull of the hand contour, so as to detect the maximum concave point between each pair of adjacent convex points and obtain the linked list of concave and convex points on the hand contour;
(4.4) applying the concave-convex angle recognition algorithm to the obtained linked list of concave and convex points on the hand contour, so as to obtain the fingertip-point linked list;
(4.5) after each finger has been identified, recognizing the gesture; for the recognition of a gesture, the number of fingers is identified first, then the names of the fingers are obtained, and then the direction vector of each finger and the angles between the fingers; these three conditions serve as the three levels of a decision tree, thereby realizing gesture recognition.
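The three-level decision tree of step (4.5) can be sketched as follows; the gesture labels and the 15-degree threshold are illustrative choices, not values taken from the patent:

```python
import math

def classify_gesture(fingers):
    """fingers: list of (name, direction_vector) pairs for each detected finger."""
    names = {n for n, _ in fingers}
    if not fingers:                                  # level 1: number of fingers
        return "fist"
    if names == {"index"}:                           # level 2: finger names
        return "point"
    if names == {"index", "middle"}:
        (_, v1), (_, v2) = fingers                   # level 3: angle between
        dot = v1[0] * v2[0] + v1[1] * v2[1]          #   the direction vectors
        n = math.hypot(*v1) * math.hypot(*v2)
        ang = math.degrees(math.acos(max(-1.0, min(1.0, dot / n))))
        return "V-sign" if ang > 15 else "two-fingers-together"
    return "unknown"
```

Each level only refines the candidates left by the previous one, so the tree stays shallow even as more gestures are added.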
2. The finger gesture recognition method based on depth images according to claim 1, wherein in step (3.2) the value of K in the K-means clustering algorithm is fixed at 2.
3. The finger gesture recognition method based on depth images according to claim 1, wherein in step (4.3) the contour maximum-concave-point scanning algorithm proceeds as follows:
(4.3.1) copying the linked list of convex points on the hand contour as the initial concave-convex point linked list;
(4.3.2) for each pair of adjacent convex points in the convex-point linked list, applying the point-to-line distance formula in turn to every candidate point of the hand contour-point linked list lying between the two convex points, so as to detect the concave point at maximum distance from the straight line joining the two convex points;
(4.3.3) inserting the concave point of maximum distance into the concave-convex point linked list between the two convex points;
(4.3.4) returning to step (4.3.2) until all points in the hand contour-point linked list have been examined;
(4.3.5) the point of maximum distance obtained by the above iteration is the maximum concave point, and an ordered concave-convex point linked list of the hand contour is generated.
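Steps (4.3.1)–(4.3.5) can be sketched as below, with plain lists standing in for the patent's linked lists; `contour` is the ordered contour-point list from step (4.1) and `hull_idx` the indices of the convex points from step (4.2):

```python
import math

def max_concave_points(contour, hull_idx):
    out = []
    n = len(hull_idx)
    for k in range(n):
        i, j = hull_idx[k], hull_idx[(k + 1) % n]
        (x1, y1), (x2, y2) = contour[i], contour[j]
        chord = math.hypot(x2 - x1, y2 - y1)
        out.append(contour[i])                       # keep the convex point
        # contour points lying between the two adjacent convex points
        between = range(i + 1, j) if j > i else \
            list(range(i + 1, len(contour))) + list(range(j))
        best, best_d = None, 0.0
        for t in between:
            x0, y0 = contour[t]
            # (4.3.2) point-to-line distance from contour point to the chord
            d = abs((x2 - x1) * (y1 - y0) - (x1 - x0) * (y2 - y1)) / chord
            if d > best_d:
                best, best_d = contour[t], d
        if best is not None:
            out.append(best)                         # (4.3.3) insert deepest concave point
    return out
```

The returned list alternates convex points with the deepest concave point of each gap, which is exactly the ordered concave-convex point list that step (4.4) consumes.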
4. The finger gesture recognition method based on depth images according to claim 1, wherein in step (4.4) the concave-convex angle recognition algorithm proceeds as follows:
(4.4.1) in the concave-convex point linked list of the hand contour, finding a convex point P0 in order from front to back, and selecting its adjacent concave points P1 and P2 in the two directions before and after it;
(4.4.2) forming the two vectors from concave point P1 to convex point P0 and from convex point P0 to concave point P2, and calculating the angle they form at point P0; if the angle is smaller than a set threshold, the point P0 is identified as a fingertip and stored in the fingertip-point linked list;
(4.4.3) if the concave-convex point linked list of the hand contour has not yet been fully examined, repeating step (4.4.1) for the next candidate convex point; otherwise, ending;
(4.4.4) calculating in turn the distance between every two adjacent and every two non-adjacent fingertip points in the fingertip-point linked list; the fingertip point common to the maximum adjacent-pair distance and the maximum non-adjacent-pair distance is determined to be the thumb; the fingertip point adjacent to the thumb at maximum distance is determined to be the index finger; the fingertip point non-adjacent to the thumb at maximum distance is determined to be the little finger; the fingertip point nearest to the index finger is determined to be the middle finger; and the remaining fingertip point is determined to be the ring finger.
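The angle test of step (4.4.2) can be sketched as follows. It is read here as the interior angle at P0 between the segments to the two adjacent concave points, which is small for a sharp fingertip; the 60-degree threshold is an illustrative assumption, since the patent does not state its value:

```python
import math

def is_fingertip(p0, p1, p2, max_angle_deg=60.0):
    # p0: candidate convex point; p1, p2: its adjacent concave points.
    v1 = (p1[0] - p0[0], p1[1] - p0[1])
    v2 = (p2[0] - p0[0], p2[1] - p0[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    return angle < max_angle_deg
```

A tall narrow peak (an extended finger) passes the test, while a shallow bump between knuckles does not, which is what makes the fingertip detection robust to contour noise.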
CN201410213052.7A 2014-05-20 2014-05-20 Finger gesture recognition methods based on depth image Active CN103984928B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410213052.7A CN103984928B (en) 2014-05-20 2014-05-20 Finger gesture recognition methods based on depth image


Publications (2)

Publication Number Publication Date
CN103984928A true CN103984928A (en) 2014-08-13
CN103984928B CN103984928B (en) 2017-08-11

Family

ID=51276890

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410213052.7A Active CN103984928B (en) 2014-05-20 2014-05-20 Finger gesture recognition methods based on depth image

Country Status (1)

Country Link
CN (1) CN103984928B (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090103780A1 (en) * 2006-07-13 2009-04-23 Nishihara H Keith Hand-Gesture Recognition Method
CN102982557A (en) * 2012-11-06 2013-03-20 桂林电子科技大学 Method for processing space hand signal gesture command based on depth camera
CN103208002A (en) * 2013-04-10 2013-07-17 桂林电子科技大学 Method and system used for recognizing and controlling gesture and based on hand profile feature


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zhou Changshao et al.: "Design of a height measurement system based on depth-of-field images", Journal of Guilin University of Electronic Technology *
Zheng Binwang et al.: "Finger detection and gesture recognition based on Kinect depth information", Transactions on Computer Science and Technology *

Cited By (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104503275A (en) * 2014-11-21 2015-04-08 深圳市超节点网络科技有限公司 Non-contact control method and equipment based on gestures
CN104375647B (en) * 2014-11-25 2017-11-03 杨龙 Exchange method and electronic equipment for electronic equipment
CN104375647A (en) * 2014-11-25 2015-02-25 杨龙 Interaction method used for electronic equipment and electronic equipment
CN104778460A (en) * 2015-04-23 2015-07-15 福州大学 Monocular gesture recognition method under complex background and illumination
CN104778460B (en) * 2015-04-23 2018-05-04 福州大学 A kind of monocular gesture identification method under complex background and illumination
CN106886741A (en) * 2015-12-16 2017-06-23 芋头科技(杭州)有限公司 A kind of gesture identification method of base finger identification
CN106971131A (en) * 2016-01-14 2017-07-21 芋头科技(杭州)有限公司 A kind of gesture identification method based on center
CN106971130A (en) * 2016-01-14 2017-07-21 芋头科技(杭州)有限公司 A kind of gesture identification method using face as reference
CN106970701A (en) * 2016-01-14 2017-07-21 芋头科技(杭州)有限公司 A kind of gesture changes recognition methods
CN106971132A (en) * 2016-01-14 2017-07-21 芋头科技(杭州)有限公司 One kind scanning gesture simultaneously knows method for distinguishing
CN106971135A (en) * 2016-01-14 2017-07-21 芋头科技(杭州)有限公司 A kind of slip gesture recognition methods
CN105718776B (en) * 2016-01-19 2018-06-22 桂林电子科技大学 A kind of three-dimension gesture verification method and system
CN105929956A (en) * 2016-04-26 2016-09-07 苏州冰格智能科技有限公司 Virtual reality-based input method
CN106648063A (en) * 2016-10-19 2017-05-10 北京小米移动软件有限公司 Gesture recognition method and device
CN106709951A (en) * 2017-01-03 2017-05-24 华南理工大学 Finger joint positioning method based on depth image
CN106709951B (en) * 2017-01-03 2019-10-18 华南理工大学 A kind of finger-joint localization method based on depth map
CN107272878A (en) * 2017-02-24 2017-10-20 广州幻境科技有限公司 A kind of recognition methods for being applied to complicated gesture and device
US11250248B2 (en) * 2017-02-28 2022-02-15 SZ DJI Technology Co., Ltd. Recognition method and apparatus and mobile platform
WO2018157286A1 (en) * 2017-02-28 2018-09-07 深圳市大疆创新科技有限公司 Recognition method and device, and movable platform
CN107092347A (en) * 2017-03-10 2017-08-25 深圳市博乐信息技术有限公司 A kind of augmented reality interaction systems and image processing method
CN107092347B (en) * 2017-03-10 2020-06-09 深圳市博乐信息技术有限公司 Augmented reality interaction system and image processing method
CN106980828B (en) * 2017-03-17 2020-06-19 深圳市魔眼科技有限公司 Method, device and equipment for determining palm area in gesture recognition
CN106980828A (en) * 2017-03-17 2017-07-25 深圳市魔眼科技有限公司 Method, device and the equipment of palm area are determined in gesture identification
CN108876968A (en) * 2017-05-10 2018-11-23 北京旷视科技有限公司 Recognition of face gate and its anti-trailing method
CN107341811A (en) * 2017-06-20 2017-11-10 上海数迹智能科技有限公司 The method that hand region segmentation is carried out using MeanShift algorithms based on depth image
CN107341811B (en) * 2017-06-20 2020-11-13 上海数迹智能科技有限公司 Method for segmenting hand region by using MeanShift algorithm based on depth image
CN107743219A (en) * 2017-09-27 2018-02-27 歌尔科技有限公司 Determination method and device, projecting apparatus, the optical projection system of user's finger positional information
CN108073283A (en) * 2017-12-07 2018-05-25 袁峰 The computational methods and gloves of hand joint
CN108073283B (en) * 2017-12-07 2021-02-09 广州深灵科技有限公司 Hand joint calculation method and glove
CN108062525B (en) * 2017-12-14 2021-04-23 中国科学技术大学 Deep learning hand detection method based on hand region prediction
CN108062525A (en) * 2017-12-14 2018-05-22 中国科学技术大学 A kind of deep learning hand detection method based on hand region prediction
CN110889387A (en) * 2019-12-02 2020-03-17 浙江工业大学 Real-time dynamic gesture recognition method based on multi-track matching
CN110888536A (en) * 2019-12-12 2020-03-17 北方工业大学 Finger interaction recognition system based on MEMS laser scanning
CN110888536B (en) * 2019-12-12 2023-04-28 北方工业大学 Finger interaction recognition system based on MEMS laser scanning
CN111191632A (en) * 2020-01-08 2020-05-22 梁正 Gesture recognition method and system based on infrared reflection gloves
CN111191632B (en) * 2020-01-08 2023-10-13 梁正 Gesture recognition method and system based on infrared reflective glove
CN113781182A (en) * 2021-09-22 2021-12-10 拉扎斯网络科技(上海)有限公司 Behavior recognition method, behavior recognition device, electronic apparatus, storage medium, and program product
CN114170382A (en) * 2021-12-07 2022-03-11 深圳职业技术学院 High-precision three-dimensional reconstruction method and device based on numerical control machine tool

Also Published As

Publication number Publication date
CN103984928B (en) 2017-08-11

Similar Documents

Publication Publication Date Title
CN103984928A (en) Finger gesture recognition method based on field depth image
Zhou et al. A novel finger and hand pose estimation technique for real-time hand gesture recognition
JP6079832B2 (en) Human computer interaction system, hand-to-hand pointing point positioning method, and finger gesture determination method
Gurav et al. Real time finger tracking and contour detection for gesture recognition using OpenCV
CN103941866B (en) Three-dimensional gesture recognizing method based on Kinect depth image
Sarkar et al. Hand gesture recognition systems: a survey
Kim et al. Simultaneous gesture segmentation and recognition based on forward spotting accumulative HMMs
Ahuja et al. Static vision based Hand Gesture recognition using principal component analysis
CN103971102A (en) Static gesture recognition method based on finger contour and decision-making trees
Oprisescu et al. Automatic static hand gesture recognition using tof cameras
Wu et al. Robust fingertip detection in a complex environment
CN104407694A (en) Man-machine interaction method and device combining human face and gesture control
Pandey et al. Hand gesture recognition for sign language recognition: A review
CN106326860A (en) Gesture recognition method based on vision
She et al. A real-time hand gesture recognition approach based on motion features of feature points
CN103995595A (en) Game somatosensory control method based on hand gestures
CN106886741A (en) A kind of gesture identification method of base finger identification
Mesbahi et al. Hand gesture recognition based on convexity approach and background subtraction
CN108614988A (en) A kind of motion gesture automatic recognition system under complex background
CN103426000B (en) A kind of static gesture Fingertip Detection
Tang et al. Hand tracking and pose recognition via depth and color information
Chaudhary et al. A vision-based method to find fingertips in a closed hand
Choi et al. RGB-D camera-based hand shape recognition for human-robot interaction
Varshini et al. Dynamic fingure gesture recognition using KINECT
Vidhate et al. Virtual paint application by hand gesture recognition system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant