CN102236412A - Three-dimensional gesture recognition system and vision-based gesture recognition method - Google Patents
- Publication number
- CN102236412A CN102236412A CN2010101742034A CN201010174203A CN102236412A CN 102236412 A CN102236412 A CN 102236412A CN 2010101742034 A CN2010101742034 A CN 2010101742034A CN 201010174203 A CN201010174203 A CN 201010174203A CN 102236412 A CN102236412 A CN 102236412A
- Authority
- CN
- China
- Prior art keywords
- gesture
- characteristic function
- gravity
- hand
- vision
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Abstract
The invention discloses a three-dimensional gesture recognition system and a vision-based gesture recognition method. The method comprises the following steps: receiving an image frame; extracting a hand contour image from the image frame; computing the center of gravity of the hand contour image; obtaining a plurality of contour points on the contour of the hand contour image; computing a plurality of distances between the center of gravity and the plurality of contour points; and recognizing a gesture according to a first characteristic function of the plurality of distances. In the embodiments, the number of extended fingers and the gesture direction can be determined according to the number and positions of the peaks of the first characteristic function.
Description
Technical field
The present invention relates generally to gesture recognition systems, and more particularly to a vision-based gesture recognition system with low computational complexity.
Background technology
Friendly interaction between humans and computers is a key element of entertainment system development, particularly for game systems. The rapid development of motion analysis systems and computing devices has introduced possible new ways of interacting with computers. However, many current approaches rely on sensing devices attached to the user's fingers. Although such devices can provide accurate hand detection, they also increase the user's burden. A preferable approach is to use the hand itself as the command device, that is, to input instructions to the computer operating system or a peripheral control device by hand motion alone. However, currently known methods and systems are rather complex and insufficiently mature.
U.S. Patent No. 6,002,808, entitled "A system for rapid recognition of computer graphics control gestures", uses image moments to determine the orientation, location and size of an overall equivalent rectangle corresponding to the gesture; in an embodiment relating to hand width, the determination is made by size. In a further embodiment, the hole formed by the contact of thumb and forefinger provides a specially recognized response stimulus through a binary representation of the hand with the corresponding hole. In a further embodiment, instantaneous detection of the object image is used to control or guide the image on the screen.
U.S. Patent No. 7,129,927, entitled "A gesture recognition system", comprises a detection component and a component that generates a number of arrayed marks on a corresponding object; the system processes the signal from the detection component and then detects the positions of the marks in the signal. The marks are divided into a first group and a second group: the first group of marks constitutes a reference position, and the system comprises a component that detects the motion of the second group of marks and generates a signal of effective motion relative to the reference position.
Therefore, there is a need for an interaction system that lets the user interact with the computer naturally and unencumbered, that is, one that allows the user to exercise control with bare hands, without additional equipment.
Summary of the invention
Accordingly, an object of the present invention is to provide a three-dimensional gesture recognition system and a vision-based gesture recognition method and system that reduce the computational complexity of vision-based recognition and achieve real-time performance.
The object of the present invention can be achieved by providing a vision-based gesture recognition method comprising the following steps: receiving an image frame; extracting a hand contour image from the image frame; computing the center of gravity of the hand contour image; obtaining a plurality of contour points on the contour of the hand contour image; computing the distances between the center of gravity and the contour points; and recognizing a gesture according to a first characteristic function of the distances.
Preferably, the gesture recognition method further comprises the following steps: setting a reference point; computing a first line between the center of gravity and the reference point; computing a second line between the center of gravity and each contour point; computing the angle between the first line and each second line; and defining the first characteristic function as a function of the angles and the distances.
Preferably, the method further comprises the following steps: providing a database recording second characteristic functions of a plurality of predefined gestures; computing a cost value between the first characteristic function and each second characteristic function; and selecting one of the predefined gestures as the recognized gesture according to the cost values.
Preferably, the method further comprises the following steps: determining whether peaks exist in the first characteristic function; and, if at least one peak exists in the first characteristic function, determining the gesture according to the number and positions of the peaks of the first characteristic function.
If no peak exists in the first characteristic function, the gesture is determined to be a fist.
Preferably, the number of extended fingers of the gesture is determined according to the number of peaks.
Preferably, the hand direction of the gesture is determined according to the peak positions.
The object of the present invention can also be achieved by providing a vision-based gesture recognition system. The system comprises an image-capturing unit, an image processing unit, a data processing unit and a gesture recognition unit. The image-capturing unit receives an image frame; the image processing unit then extracts a hand contour image from the image frame and computes the center of gravity of the hand contour image. The data processing unit obtains a plurality of contour points on the contour of the hand contour image and computes a plurality of distances between the center of gravity and the contour points. The gesture recognition unit recognizes a gesture according to a first characteristic function of the distances.
Preferably, the data processing unit further computes the angles between a first line and a plurality of second lines, and the first characteristic function is defined as a function of the angles and the distances, wherein the first line connects the center of gravity to a reference point and each second line connects the center of gravity to one of the contour points.
Preferably, the system further comprises a database recording second characteristic functions of a plurality of predefined gestures. The gesture recognition unit also computes the cost values between the first characteristic function and the second characteristic functions and, according to the cost values, selects one of the predefined gestures as the gesture.
Preferably, the gesture recognition unit determines at least one peak of the first characteristic function, and recognizes the gesture according to the number and positions of the peaks of the first characteristic function.
Preferably, if no peak exists in the first characteristic function, the gesture recognition unit determines the gesture to be a fist.
Preferably, the gesture recognition unit determines the number of extended fingers of the gesture according to the number of peaks, and determines the hand direction of the gesture according to the peak positions.
The object of the present invention can also be achieved by providing a three-dimensional gesture recognition system. The system comprises a first image-capturing unit, a second image-capturing unit, an image processing unit, a data processing unit and a gesture recognition unit. The first image-capturing unit receives a first image frame, and the second image-capturing unit receives a second image frame. The image processing unit extracts a first hand contour image and a second hand contour image from the first image frame and the second image frame, respectively, and computes a first center of gravity of the first hand contour image and a second center of gravity of the second hand contour image. The data processing unit then obtains a plurality of first contour points on the contour of the first hand contour image and a plurality of second contour points on the contour of the second hand contour image, and computes a plurality of first distances between the first center of gravity and the first contour points and a plurality of second distances between the second center of gravity and the second contour points. The gesture recognition unit recognizes a first gesture according to a first characteristic function of the first distances, recognizes a second gesture according to a second characteristic function of the second distances, and determines a three-dimensional gesture from the first gesture and the second gesture.
Preferably, the gesture recognition unit recognizes the first gesture according to the number of peaks and at least one peak position of the first characteristic function, and recognizes the second gesture according to the number of peaks and at least one peak position of the second characteristic function.
Description of drawings
The accompanying drawings are included to provide a further understanding of the invention, and together with the description illustrate embodiments of the invention and explain its principles.
Fig. 1 is a flowchart of an embodiment of the vision-based gesture recognition method according to the present invention;
Fig. 2 is a schematic diagram of a hand image according to the present invention;
Fig. 3 is a schematic diagram of a hand contour image according to the present invention;
Fig. 4 is a first example waveform of the distances and angles corresponding to the contour points according to the present invention;
Fig. 5 is a second example waveform of the distances and angles corresponding to the contour points according to the present invention;
Fig. 6 is a third example waveform of the distances and angles corresponding to the contour points according to the present invention;
Fig. 7 is a block diagram of an embodiment of the vision-based gesture recognition system according to the present invention; and
Fig. 8 is a block diagram of an embodiment of the three-dimensional gesture recognition system according to the present invention.
Embodiment
In the following detailed description, reference is made to the accompanying drawings, in which reference numerals identify specific embodiments of the invention. Those skilled in the art may use other embodiments and make structural, electrical and procedural changes without departing from the scope of the present invention. Wherever possible, the same reference numerals are used throughout the drawings to denote the same or similar parts.
Fig. 1 is a flowchart of an embodiment of the vision-based gesture recognition method according to the present invention. The embodiment comprises the following steps. In step 10, an image frame is received; then, in step 11, it is determined whether a hand image exists in the received image frame. If no hand image exists in the received frame, step 10 is repeated; otherwise, if a hand image exists, such as the hand image 21 shown in Fig. 2, then in step 12 the hand contour image is extracted from the received frame. Preferably, edge detection can be performed on the hand image 21 to extract the hand contour, such as the hand contour 22 shown in Fig. 2; since the hand contour 22 encloses the image region 23, the region bounded by the edge of the hand image 21 can be defined as the hand contour image.
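The contour-extraction step can be sketched as follows, assuming the hand image has already been segmented into a binary mask. The patent does not name a specific edge-detection operator; the foreground-with-background-neighbour test below is one simple stand-in, and the function name is illustrative, not from the patent:

```python
def hand_contour(mask):
    """Return contour pixels of a binary mask (2D list of 0/1):
    foreground pixels with at least one background or out-of-image
    4-neighbour.  A stand-in for the patent's edge-detection step."""
    h, w = len(mask), len(mask[0])
    contour = []
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            # a foreground pixel is on the contour if any 4-neighbour
            # is background or lies outside the image
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if not (0 <= ny < h and 0 <= nx < w) or not mask[ny][nx]:
                    contour.append((x, y))
                    break
    return contour
```

On a solid 3×3 mask this keeps the eight border pixels and drops the interior one, matching the idea that the contour encloses the image region.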
In step 13, the center of gravity of the hand contour image is computed. Preferably, a palm location computation can be performed to obtain the center of gravity of the hand contour image 23. For example, a moment function I(x, y) can be selected according to the common two-dimensional shape of the hand. From the selected moment function, the zeroth- and first-order moments M00, M10, M01 and the second-order moments M11, M20 and M02 are computed, following the standard definition of image moments, Mpq = Σx Σy x^p y^q I(x, y).
From the moments M00, M10 and M01, the center of gravity (xc, yc) is obtained as xc = M10/M00 and yc = M01/M00. Fig. 3 illustrates an example location of the center of gravity 24.
From xc, yc, M00, M11, M20 and M02, the length L1 and width L2 of the equivalent rectangle can then be derived.
In step 14, a plurality of contour points on the contour of the hand contour image is obtained, for example the contour points 26 arranged along the hand contour 22 shown in Fig. 3. In step 15, the distances between the center of gravity and the contour points are computed, such as the distance d shown in Fig. 3. In step 16, a gesture is recognized according to a first characteristic function of the distances. Preferably, the first characteristic function can be a function of the distances and of the angles formed by the center of gravity, the reference point and the contour points. In Fig. 3, the angle θ is formed by a first line 271 connecting the center of gravity to the reference point 25 and a second line 272 connecting the center of gravity to one of the contour points 26.
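Steps 14 to 16 can be sketched as follows: for each contour point, compute its distance d from the center of gravity and the angle θ between the line to the reference point and the line to that point. This is an illustrative helper; the patent does not fix how angles are measured or normalized, so here they are folded into [0, 360) degrees:

```python
import math

def signature(contour, cog, ref):
    """Characteristic-function samples: for each contour point, the
    angle (degrees, in [0, 360)) between the line cog->ref and the
    line cog->point, paired with the distance cog->point."""
    cx, cy = cog
    base = math.atan2(ref[1] - cy, ref[0] - cx)  # direction of first line
    out = []
    for x, y in contour:
        d = math.hypot(x - cx, y - cy)
        a = math.degrees(math.atan2(y - cy, x - cx) - base) % 360.0
        out.append((a, d))
    out.sort()  # order by angle so the samples form a waveform
    return out
```

Sorting by angle turns the samples into the distance-versus-angle waveform discussed below.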
Fig. 4 is a waveform of the characteristic function of distance versus angle for the contour points according to the present invention, with the horizontal axis representing the angle and the vertical axis the distance. Preferably, the waveform uses normalized distances to reduce the influence of different overall hand sizes.
The finger region is smaller than the palm region, so the center of gravity of the hand contour image is usually located at the central part of the palm. When the user shows a gesture with extended fingers, the distance between a fingertip and the center of gravity is larger than the distances between other contour points and the center of gravity. It should therefore be noted that the presence of peaks in the waveform can be used to determine whether the hand contour image is an image of a gesture with extended fingers. Preferably, the number of extended fingers of the gesture can be determined from the number of peaks. In an embodiment, an angular range and a distance threshold can be defined to decide whether a peak exists in the waveform. If, within a defined angular range, there is a local maximum and the distance variation is larger than the distance threshold, a peak is determined to exist in that angular range, as in the waveforms shown in Fig. 4 and Fig. 6. Conversely, if a local maximum lies within the defined angular range but the distance variation is smaller than the distance threshold, no peak exists in that range, as in the waveform shown in Fig. 5. According to the defined angular ranges, the whole waveform can be divided into seven parts in which the presence of peaks is checked.
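The peak test above — divide the waveform into angular parts and accept a local maximum only when the distance variation within the part exceeds a threshold — can be sketched as follows. The division into seven equal sectors and the variation test are taken from the description; the exact sector boundaries and metric are assumptions:

```python
def find_peaks(sig, dist_threshold, n_sectors=7):
    """Scan a sorted (angle, distance) waveform sector by sector and
    report the peak angle of each sector whose distance variation
    (max - min) exceeds dist_threshold."""
    peaks = []
    width = 360.0 / n_sectors
    for k in range(n_sectors):
        lo, hi = k * width, (k + 1) * width
        pts = [(a, d) for (a, d) in sig if lo <= a < hi]
        if not pts:
            continue
        dmax = max(d for _, d in pts)
        dmin = min(d for _, d in pts)
        # a local maximum counts as a peak only if the variation
        # within the sector rises above the distance threshold
        if dmax - dmin > dist_threshold:
            peaks.append(max(pts, key=lambda p: p[1])[0])
    return peaks
```

A flat waveform (a fist) yields no peaks; a waveform with one tall bump yields one, whose count and angle feed the finger-number and direction decisions.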
Preferably, the direction of the hand contour image can be determined from the position of the reference point in the image and the peak positions in the waveform. For example, if the reference point lies at the right edge of the image and a peak lies between 140 degrees and 220 degrees, the extended-finger gesture can be determined to point west. In the waveform shown in Fig. 4, one peak exists at an angle between 150 degrees and 200 degrees and the reference point is located at the right edge of the image, so the hand contour image can be judged to be an extended-finger gesture pointing west. In Fig. 5, the waveform is obtained from the center of gravity 281 and the defined reference point 282; since no peak exists in the waveform, the hand contour image is determined to be a fist. In Fig. 6, the waveform is obtained from the center of gravity 291 and the defined reference point 292; five peaks exist in the waveform at angular positions between 150 degrees and 250 degrees, and the reference point 292 is located at the bottom edge of the image, so the hand contour image can be judged to be a gesture with five extended fingers pointing north.
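The direction decision can be sketched as a lookup from the reference point's image edge and the peak angles, using only the two worked examples the description gives (right edge with a peak in 140–220 degrees points west; bottom edge with peaks in 150–250 degrees points north). The full edge-to-direction table is not specified in the patent, so this helper is a hypothetical fragment of it:

```python
def gesture_direction(ref_edge, peak_angles):
    """Map the reference point's image edge plus the peak positions to
    a compass direction, covering only the two cases worked out in the
    description; other cases are reported as unknown."""
    if ref_edge == "right" and any(140.0 <= a <= 220.0 for a in peak_angles):
        return "west"
    if ref_edge == "bottom" and any(150.0 <= a <= 250.0 for a in peak_angles):
        return "north"
    return "unknown"
```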
Fig. 7 is a block diagram of an embodiment of the vision-based gesture recognition system according to the present invention. The embodiment comprises an image-capturing unit 41, an image processing unit 42, a data processing unit 43, a gesture recognition unit 44 and a database 45. The image-capturing unit 41 receives an image frame 411; the image processing unit 42 then extracts a hand contour image 421 from the image frame 411 and computes the center of gravity 422 of the hand contour image 421. The data processing unit 43 obtains a plurality of contour points 431 on the contour 423 of the hand contour image 421 and computes a plurality of distances 432 between the center of gravity 422 and the contour points 431. Preferably, the image-capturing unit 41 is a video camera or a web camera. Preferably, the data processing unit 43 further computes the interior angles 433 formed by the center of gravity 422, the reference point and the contour points 431, such as the angle θ shown in Fig. 3.
The gesture recognition unit 44 recognizes a gesture 441 according to a first characteristic function 442 of the distances 432. The database 45 records second characteristic functions of a plurality of predefined gestures. Preferably, the gesture recognition unit 44 computes a cost value 443 between the first characteristic function 442 and each of the second characteristic functions 452, and selects one of the predefined gestures as the gesture 441 according to the cost values 443. For example, both the first characteristic function 442 and the second characteristic functions 452 can be functions of the distances 432 and the interior angles 433, such as the waveforms shown in Fig. 4, Fig. 5 or Fig. 6. The gesture recognition unit 44 can compute the waveform difference between the first characteristic function 442 and each second characteristic function 452 and define that difference as the cost value 443; the gesture recognition unit 44 then selects, as the gesture 441, the predefined gesture corresponding to the second characteristic function at minimum distance from the first characteristic function 442.
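The cost-value matching can be sketched as follows. The patent defines the cost value as the waveform difference but does not fix the metric; a sum of squared distance differences over waveforms sampled at the same angles is one plausible choice, and the function names are illustrative:

```python
def cost(sig_a, sig_b):
    """Waveform difference between two characteristic functions that
    are sampled at the same angles: sum of squared distance gaps."""
    return sum((da - db) ** 2 for (_, da), (_, db) in zip(sig_a, sig_b))

def classify(sig, database):
    """Select the predefined gesture whose recorded second
    characteristic function has minimum cost against the first
    characteristic function of the observed hand."""
    return min(database, key=lambda name: cost(sig, database[name]))
```

The database here is simply a mapping from gesture names to their recorded (angle, distance) waveforms.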
Preferably, the gesture recognition unit 44 can recognize the gesture 441 corresponding to the hand contour image 421 according to the number and positions of the peaks in the waveform of the first characteristic function 442. For example, the presence of peaks in the waveform can be used to determine whether the hand contour image shows a gesture with extended fingers, the number of peaks can be used to determine the number of extended fingers, and the direction of the gesture 441 can be determined from the position of the reference point in the image and the peak positions in the waveform.
Fig. 8 is a block diagram of the three-dimensional gesture recognition system according to the present invention. In this embodiment, the system comprises a first image-capturing unit 501, a second image-capturing unit 502, an image processing unit 52, a data processing unit 53 and a gesture recognition unit 54. The first image-capturing unit 501 receives a first image frame 511, and the second image-capturing unit 502 receives a second image frame 512.
The image processing unit 52 extracts a first hand contour image 5211 and a second hand contour image 5212 from the first image frame 511 and the second image frame 512, respectively, and computes the first center of gravity 5221 of the first hand contour image 5211 and the second center of gravity 5222 of the second hand contour image 5212.
Then, the data processing unit 53 obtains a plurality of first contour points 5311 on the contour 5231 of the first hand contour image 5211 and a plurality of second contour points 5312 on the contour 5232 of the second hand contour image 5212, and computes a plurality of first distances 5321 between the first center of gravity 5221 and the first contour points 5311 and a plurality of second distances 5322 between the second center of gravity 5222 and the second contour points 5312.
The gesture recognition unit 54 recognizes a first gesture 541 according to the characteristic function of the first distances, recognizes a second gesture 542 according to the characteristic function of the second distances, and determines a three-dimensional gesture 543 from the first gesture 541 and the second gesture 542. Preferably, the gesture recognition unit 54 recognizes the first gesture 541 and the second gesture 542 according to the number of peaks and at least one peak position of the respective characteristic functions.
The above functions or units can be implemented by, for example, a microprocessor, a controller, a microcontroller, or an application-specific integrated circuit (ASIC) coded to carry out the functions. Based on the present description, the design, development and implementation of the code will be apparent to those skilled in the art.
Those skilled in the art can make modifications and variations to the present invention without departing from the spirit or scope of the invention. Therefore, the invention is intended to cover the various modifications and variations of the present invention within the scope of the claims and their equivalents.
Claims (15)
1. A vision-based gesture recognition method, characterized in that the method comprises:
receiving an image frame;
extracting a hand contour image from the image frame;
computing the center of gravity of the hand contour image;
obtaining a plurality of contour points on the contour of the hand contour image;
computing the distances between the center of gravity and the plurality of contour points; and
recognizing a gesture according to a first characteristic function of the plurality of distances.
2. The vision-based gesture recognition method as claimed in claim 1, wherein the step of recognizing the gesture further comprises:
setting a reference point;
computing a first line between the center of gravity and the reference point;
computing a second line between the center of gravity and each contour point;
computing the angle between the first line and each second line; and
defining the first characteristic function as a function of the angles and the distances.
3. The vision-based gesture recognition method as claimed in claim 2, wherein the step of recognizing the gesture further comprises:
providing a database recording second characteristic functions of a plurality of predefined gestures;
computing the cost values between the first characteristic function and the second characteristic functions; and
selecting one of the predefined gestures as the gesture according to the cost values.
4. The vision-based gesture recognition method as claimed in claim 2, wherein the step of recognizing the gesture further comprises:
determining whether peaks exist in the first characteristic function; and
if at least one peak exists in the first characteristic function, determining the gesture according to the number and positions of the peaks of the first characteristic function.
5. The vision-based gesture recognition method as claimed in claim 4, further comprising:
if no peak exists in the first characteristic function, determining the gesture to be a fist.
6. The vision-based gesture recognition method as claimed in claim 4, further comprising: determining the number of extended fingers of the gesture according to the number of peaks.
7. The vision-based gesture recognition method as claimed in claim 4, further comprising: determining the hand direction of the gesture according to the peak positions.
8. A vision-based gesture recognition system, the system comprising:
an image-capturing unit, receiving an image frame;
an image processing unit, extracting a hand contour image from the image frame and computing the center of gravity of the hand contour image;
a data processing unit, obtaining a plurality of contour points on the contour of the hand contour image and computing a plurality of distances between the center of gravity and the plurality of contour points; and
a gesture recognition unit, recognizing a gesture according to a first characteristic function of the plurality of distances.
9. The vision-based gesture recognition system as claimed in claim 8, wherein the data processing unit further computes the angles between a first line and a plurality of second lines, and the first characteristic function is defined as a function of the angles and the plurality of distances, wherein the first line connects the center of gravity to a reference point and each second line connects the center of gravity to a contour point.
10. The vision-based gesture recognition system as claimed in claim 9, further comprising: a database, recording second characteristic functions of a plurality of predefined gestures, wherein the gesture recognition unit further computes the cost values between the first characteristic function and the second characteristic functions and, according to the cost values, selects one of the predefined gestures as the gesture.
11. The vision-based gesture recognition system as claimed in claim 9, wherein the gesture recognition unit further determines whether at least one peak exists in the first characteristic function, and recognizes the gesture according to the number and positions of the peaks of the first characteristic function.
12. The vision-based gesture recognition system as claimed in claim 11, wherein, if no peak exists in the first characteristic function, the gesture recognition unit determines the gesture to be a fist.
13. The vision-based gesture recognition system as claimed in claim 11, wherein the gesture recognition unit determines the number of extended fingers according to the number of peaks, and determines the hand direction of the gesture according to the positions of the peaks.
14. A three-dimensional gesture recognition system, characterized in that the system comprises:
a first image-capturing unit, receiving a first image frame;
a second image-capturing unit, receiving a second image frame;
an image processing unit, extracting a first hand contour image and a second hand contour image from the first image frame and the second image frame respectively, and computing the first center of gravity of the first hand contour image and the second center of gravity of the second hand contour image;
a data processing unit, obtaining a plurality of first contour points on the contour of the first hand contour image and a plurality of second contour points on the contour of the second hand contour image, and computing a plurality of first distances between the first center of gravity and the plurality of first contour points and a plurality of second distances between the second center of gravity and the plurality of second contour points; and
a gesture recognition unit, recognizing a first gesture according to a first characteristic function of the plurality of first distances, recognizing a second gesture according to a second characteristic function of the plurality of second distances, and determining a three-dimensional gesture according to the first gesture and the second gesture.
15. The three-dimensional gesture recognition system as claimed in claim 14, wherein the gesture recognition unit recognizes the first gesture according to the number of peaks and at least one peak position of the first characteristic function, and recognizes the second gesture according to the number of peaks and at least one peak position of the second characteristic function.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2010101742034A CN102236412A (en) | 2010-04-30 | 2010-04-30 | Three-dimensional gesture recognition system and vision-based gesture recognition method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN102236412A true CN102236412A (en) | 2011-11-09 |
Family
ID=44887133
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2010101742034A Pending CN102236412A (en) | 2010-04-30 | 2010-04-30 | Three-dimensional gesture recognition system and vision-based gesture recognition method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102236412A (en) |
Cited By (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102521579A (en) * | 2011-12-21 | 2012-06-27 | Tcl集团股份有限公司 | Method for identifying pushing action based on two-dimensional planar camera and system |
CN102779268A (en) * | 2012-02-06 | 2012-11-14 | 西南科技大学 | Hand swing motion direction judging method based on direction motion historigram and competition mechanism |
CN103019389A (en) * | 2013-01-12 | 2013-04-03 | 福建华映显示科技有限公司 | Gesture recognition system and gesture recognition method |
Applications Claiming Priority (1)

2010-04-30: Application CN2010101742034A filed in China (CN); published as CN102236412A (legal status: Pending)
Non-Patent Citations (2)
Title |
---|
Ding Haiyang (丁海洋) et al., "Gesture recognition algorithm combining multi-scale models and moment descriptors", Journal of Northern Jiaotong University (北方交通大学学报) * |
Su Jiulin (苏九林) et al., "A letter gesture recognition algorithm based on key points", Computer Engineering (计算机工程) * |
Cited By (55)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102521579A (en) * | 2011-12-21 | 2012-06-27 | TCL Corporation | Method and system for identifying a pushing action based on a two-dimensional planar camera |
US9679215B2 (en) | 2012-01-17 | 2017-06-13 | Leap Motion, Inc. | Systems and methods for machine control |
US9945660B2 (en) | 2012-01-17 | 2018-04-17 | Leap Motion, Inc. | Systems and methods of locating a control object appendage in three dimensional (3D) space |
US9672441B2 (en) | 2012-01-17 | 2017-06-06 | Leap Motion, Inc. | Enhanced contrast for object detection and characterization by optical imaging based on differences between images |
US10699155B2 (en) | 2012-01-17 | 2020-06-30 | Ultrahaptics IP Two Limited | Enhanced contrast for object detection and characterization by optical imaging based on differences between images |
US9153028B2 (en) | 2012-01-17 | 2015-10-06 | Leap Motion, Inc. | Systems and methods for capturing motion in three-dimensional space |
US10691219B2 (en) | 2012-01-17 | 2020-06-23 | Ultrahaptics IP Two Limited | Systems and methods for machine control |
US10565784B2 (en) | 2012-01-17 | 2020-02-18 | Ultrahaptics IP Two Limited | Systems and methods for authenticating a user according to a hand of the user moving in a three-dimensional (3D) space |
US10767982B2 (en) | 2012-01-17 | 2020-09-08 | Ultrahaptics IP Two Limited | Systems and methods of locating a control object appendage in three dimensional (3D) space |
US9436998B2 (en) | 2012-01-17 | 2016-09-06 | Leap Motion, Inc. | Systems and methods of constructing three-dimensional (3D) model of an object using image cross-sections |
US10410411B2 (en) | 2012-01-17 | 2019-09-10 | Leap Motion, Inc. | Systems and methods of object shape and position determination in three-dimensional (3D) space |
US11782516B2 (en) | 2012-01-17 | 2023-10-10 | Ultrahaptics IP Two Limited | Differentiating a detected object from a background using a gaussian brightness falloff pattern |
US9495613B2 (en) | 2012-01-17 | 2016-11-15 | Leap Motion, Inc. | Enhanced contrast for object detection and characterization by optical imaging using formed difference images |
US10366308B2 (en) | 2012-01-17 | 2019-07-30 | Leap Motion, Inc. | Enhanced contrast for object detection and characterization by optical imaging based on differences between images |
US11720180B2 (en) | 2012-01-17 | 2023-08-08 | Ultrahaptics IP Two Limited | Systems and methods for machine control |
US9697643B2 (en) | 2012-01-17 | 2017-07-04 | Leap Motion, Inc. | Systems and methods of object shape and position determination in three-dimensional (3D) space |
US9652668B2 (en) | 2012-01-17 | 2017-05-16 | Leap Motion, Inc. | Enhanced contrast for object detection and characterization by optical imaging based on differences between images |
US9934580B2 (en) | 2012-01-17 | 2018-04-03 | Leap Motion, Inc. | Enhanced contrast for object detection and characterization by optical imaging based on differences between images |
US9070019B2 (en) | 2012-01-17 | 2015-06-30 | Leap Motion, Inc. | Systems and methods for capturing motion in three-dimensional space |
US11308711B2 (en) | 2012-01-17 | 2022-04-19 | Ultrahaptics IP Two Limited | Enhanced contrast for object detection and characterization by optical imaging based on differences between images |
US9741136B2 (en) | 2012-01-17 | 2017-08-22 | Leap Motion, Inc. | Systems and methods of object shape and position determination in three-dimensional (3D) space |
US9767345B2 (en) | 2012-01-17 | 2017-09-19 | Leap Motion, Inc. | Systems and methods of constructing three-dimensional (3D) model of an object using image cross-sections |
US9778752B2 (en) | 2012-01-17 | 2017-10-03 | Leap Motion, Inc. | Systems and methods for machine control |
CN102779268A (en) * | 2012-02-06 | 2012-11-14 | Southwest University of Science and Technology | Hand swing motion direction judging method based on directional motion history image and competition mechanism |
CN102779268B (en) * | 2012-02-06 | 2015-04-22 | Southwest University of Science and Technology | Hand swing motion direction judging method based on directional motion history image and competition mechanism |
US9285893B2 (en) | 2012-11-08 | 2016-03-15 | Leap Motion, Inc. | Object detection and tracking with variable-field illumination devices |
US10609285B2 (en) | 2013-01-07 | 2020-03-31 | Ultrahaptics IP Two Limited | Power consumption in motion-capture systems |
US9465461B2 (en) | 2013-01-08 | 2016-10-11 | Leap Motion, Inc. | Object detection and tracking with audio and optical signals |
US10097754B2 (en) | 2013-01-08 | 2018-10-09 | Leap Motion, Inc. | Power consumption in motion-capture systems with audio and optical signals |
CN103019389B (en) * | 2013-01-12 | 2016-05-18 | Fujian Huaying Display Technology Co., Ltd. | Gesture recognition system and gesture recognition method |
CN103019389A (en) * | 2013-01-12 | 2013-04-03 | Fujian Huaying Display Technology Co., Ltd. | Gesture recognition system and gesture recognition method |
US11353962B2 (en) | 2013-01-15 | 2022-06-07 | Ultrahaptics IP Two Limited | Free-space user interface and control using virtual constructs |
US11740705B2 (en) | 2013-01-15 | 2023-08-29 | Ultrahaptics IP Two Limited | Method and system for controlling a machine according to a characteristic of a control object |
US11874970B2 (en) | 2013-01-15 | 2024-01-16 | Ultrahaptics IP Two Limited | Free-space user interface and control using virtual constructs |
US10739862B2 (en) | 2013-01-15 | 2020-08-11 | Ultrahaptics IP Two Limited | Free-space user interface and control using virtual constructs |
US11693115B2 (en) | 2013-03-15 | 2023-07-04 | Ultrahaptics IP Two Limited | Determining positional information of an object in space |
US10585193B2 (en) | 2013-03-15 | 2020-03-10 | Ultrahaptics IP Two Limited | Determining positional information of an object in space |
US11099653B2 (en) | 2013-04-26 | 2021-08-24 | Ultrahaptics IP Two Limited | Machine responsiveness to dynamic user movements and gestures |
US10452151B2 (en) | 2013-04-26 | 2019-10-22 | Ultrahaptics IP Two Limited | Non-tactile interface systems and methods |
US9916009B2 (en) | 2013-04-26 | 2018-03-13 | Leap Motion, Inc. | Non-tactile interface systems and methods |
US10846942B1 (en) | 2013-08-29 | 2020-11-24 | Ultrahaptics IP Two Limited | Predictive information for free space gesture control and communication |
US11282273B2 (en) | 2013-08-29 | 2022-03-22 | Ultrahaptics IP Two Limited | Predictive information for free space gesture control and communication |
US11776208B2 (en) | 2013-08-29 | 2023-10-03 | Ultrahaptics IP Two Limited | Predictive information for free space gesture control and communication |
US11461966B1 (en) | 2013-08-29 | 2022-10-04 | Ultrahaptics IP Two Limited | Determining spans and span lengths of a control object in a free space gesture control environment |
US11775033B2 (en) | 2013-10-03 | 2023-10-03 | Ultrahaptics IP Two Limited | Enhanced field of view to augment three-dimensional (3D) sensory space for free-space gesture interpretation |
US11868687B2 (en) | 2013-10-31 | 2024-01-09 | Ultrahaptics IP Two Limited | Predictive information for free space gesture control and communication |
US9613262B2 (en) | 2014-01-15 | 2017-04-04 | Leap Motion, Inc. | Object detection and tracking for providing a virtual device experience |
US11778159B2 (en) | 2014-08-08 | 2023-10-03 | Ultrahaptics IP Two Limited | Augmented reality with motion sensing |
CN106372564A (en) * | 2015-07-23 | 2017-02-01 | Ricoh Co., Ltd. | Gesture recognition method and apparatus |
CN105034006A (en) * | 2015-08-28 | 2015-11-11 | Kunshan Tami Robot Co., Ltd. | Finger-guessing game robot based on a LeapMotion apparatus and finger-guessing game gesture recognition method |
CN109416570B (en) * | 2015-12-31 | 2022-04-05 | Microsoft Technology Licensing, LLC | Hand gesture API using finite state machines and gesture language discrete values |
CN109416570A (en) * | 2015-12-31 | 2019-03-01 | Microsoft Technology Licensing, LLC | Hand gesture API using finite state machines and gesture language discrete values |
CN106054627A (en) * | 2016-06-12 | 2016-10-26 | Gree Electric Appliances, Inc. of Zhuhai | Control method and device based on gesture recognition, and air conditioner |
CN106326860A (en) * | 2016-08-23 | 2017-01-11 | Wuhan Shantu Technology Co., Ltd. | Vision-based gesture recognition method |
CN107261498A (en) * | 2017-06-08 | 2017-10-20 | Shenzhen Naisi Network Technology Co., Ltd. | Gesture-based game control method and system for a terminal |
Similar Documents
Publication | Title |
---|---|
CN102236412A (en) | Three-dimensional gesture recognition system and vision-based gesture recognition method | |
Dominio et al. | Combining multiple depth-based descriptors for hand gesture recognition | |
US20110268365A1 (en) | 3d hand posture recognition system and vision based hand posture recognition method thereof | |
CN104636725B (en) | Gesture recognition method and system based on depth images | |
CN102231093B (en) | Screen locating control method and device | |
CN104809387B (en) | Contactless unlocking method and device based on video image gesture identification | |
US8373654B2 (en) | Image based motion gesture recognition method and system thereof | |
CN103218605B (en) | Fast human-eye positioning method based on integral projection and edge detection | |
CN105849673A (en) | Human-to-computer natural three-dimensional hand gesture based navigation method | |
CN103150019A (en) | Handwriting input system and method | |
CN102467657A (en) | Gesture recognizing system and method | |
RU2014108820A (en) | Image processor containing a gesture recognition system with functional features for detecting and tracking fingers | |
CN103282858B (en) | Method and apparatus for detecting posture input | |
CN105447437A (en) | Fingerprint identification method and device | |
WO2012081012A1 (en) | Computer vision based hand identification | |
CN107688779A (en) | Robot gesture interaction method and apparatus based on RGB-D camera depth images | |
CN104007819A (en) | Gesture recognition method and device and Leap Motion system | |
KR101745651B1 (en) | System and method for recognizing hand gesture | |
CN104346816A (en) | Depth determining method and device and electronic equipment | |
CN101976330A (en) | Gesture recognition method and system | |
US10248231B2 (en) | Electronic device with fingerprint detection | |
CN111652017B (en) | Dynamic gesture recognition method and system | |
WO2016074625A1 (en) | Operation control method for flexible display device | |
CN111798487A (en) | Target tracking method, device and computer readable storage medium | |
CN104914989A (en) | Gesture recognition apparatus and control method of gesture recognition apparatus |
Legal Events
Code | Title | Description |
---|---|---|
C06 | Publication | |
PB01 | Publication | |
C10 | Entry into substantive examination | |
SE01 | Entry into force of request for substantive examination | |
C02 | Deemed withdrawal of patent application after publication (patent law 2001) | |
WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 2011-11-09 |