CN103399634A - Gesture recognition system and gesture recognition method


Info

Publication number
CN103399634A
Authority
CN
China
Prior art keywords
gesture
human hand
analysis module
module
motion trajectory
Prior art date
Legal status
Granted
Application number
CN2013103098280A
Other languages
Chinese (zh)
Other versions
CN103399634B (en)
Inventor
唐琪
冯声振
Current Assignee
AAC Acoustic Technologies Shenzhen Co Ltd
AAC Technologies Holdings Nanjing Co Ltd
Original Assignee
AAC Acoustic Technologies Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by AAC Acoustic Technologies Shenzhen Co Ltd
Priority to CN201310309828.0A (granted as CN103399634B)
Publication of CN103399634A
Application granted
Publication of CN103399634B
Legal status: Active
Anticipated expiration

Abstract

The invention relates to a gesture recognition system comprising a video analysis module, an audio analysis module and a comprehensive judgment module. The comprehensive judgment module receives the hand gestures and gesture confidence levels output by the video analysis module and the audio analysis module, selects the hand gesture of either the video analysis module or the audio analysis module according to the gesture confidence levels, and outputs the selected hand gesture to an application program. The invention further provides a gesture recognition method. The comprehensive judgment module controls the operation of the video analysis module and the audio analysis module according to how adverse the ambient conditions are, receives the hand gestures and gesture confidence levels output by the two modules, and selects a hand gesture by comparing the confidence levels. This improves the reliability of the recognized hand gesture and greatly broadens the application of gesture recognition in different environments.

Description

Gesture recognition system and recognition method
[Technical Field]
The present invention relates to the field of gesture recognition, and in particular to a gesture recognition system and recognition method.
[Background Art]
With the development of personal electronic products, the way people interact with electronic devices keeps changing: from keyboard input at the beginning, to touch input, and now to the new contactless human-machine interaction mode of gesture recognition.
Currently popular gesture recognition is either video-based or audio-based. Video-based gesture recognition places high demands on the lighting of the environment. For example, when the background of the gesture motion is skin-colored, the hand is difficult to distinguish from the background; when strong light shines into the camera, the image is unclear; or the background lighting conditions may simply be very poor. In such cases the severity of the optical environment greatly reduces the confidence of video-based gesture recognition. Audio-based gesture recognition, in turn, places high demands on the acoustic interference in the environment. For example, when there is interference in the ultrasonic band used for gesture recognition, such as a nearby broadband noise source, the confidence of ultrasonic gesture recognition decreases according to the ratio of the noise signal strength to the strength of the system's ultrasonic signal.
Therefore, in view of the above technical problems, it is necessary to provide a novel gesture recognition system and recognition method.
[Summary of the Invention]
The object of the present invention is to provide a gesture recognition system and recognition method that solve the technical problem that gesture recognition based on video alone suffers greatly reduced confidence when the optical environment is poor, and gesture recognition based on audio alone suffers greatly reduced confidence when the acoustic environment is poor.
In order to solve the above technical problems, the invention provides a gesture recognition system comprising: a video analysis module, which obtains from a camera the image and motion trajectory of a human hand in front of the electronic device, as captured by that camera, compares the captured hand image and motion trajectory with a preset hand-image feature library and motion-trajectory feature library respectively to obtain the image feature value and motion-trajectory feature value of the hand, and then compares the image feature value and motion-trajectory feature value with a preset template feature value library to obtain a hand gesture and a gesture confidence level; an audio analysis module, which comprises a sound-emitting module for emitting sound and a sound collection module for receiving sound, wherein the sound emitted by the sound-emitting module is reflected by the hand and received by the sound collection module, the motion-trajectory feature value of the hand is calculated from the received signal, and the motion-trajectory feature value is then compared with the preset template feature value library to obtain a hand gesture and a gesture confidence level; and a comprehensive judgment module, which, according to a control signal from the application program that uses the gesture recognition system, controls the video analysis module to work alone, the audio analysis module to work alone, or the video analysis module and the audio analysis module to work simultaneously, receives the hand gestures and gesture confidence levels output by the video analysis module and the audio analysis module respectively, selects the hand gesture of either the video analysis module or the audio analysis module according to a predefined gesture confidence rule, and outputs the selected hand gesture to the application program.
The invention also provides a gesture recognition method comprising the following steps. S1: a comprehensive judgment module receives a control signal output by the application program that uses the gesture recognition system and, according to this control signal, controls the video analysis module to work alone, the audio analysis module to work alone, or the video analysis module and the audio analysis module to work simultaneously. S2: the comprehensive judgment module controls the video analysis module to work alone, with the following specific steps. S21: the video analysis module obtains from a camera the image and motion trajectory of the human hand in front of the electronic device, as captured by that camera, compares them with a preset hand-image feature library and motion-trajectory feature library respectively to obtain the image feature value and motion-trajectory feature value of the hand, and then compares the image feature value and motion-trajectory feature value with a preset template feature value library to obtain a hand gesture and a gesture confidence level. S22: the comprehensive judgment module receives the hand gesture and gesture confidence level output by the video analysis module, selects the hand gesture of the video analysis module according to a predefined gesture confidence rule, and outputs the selected hand gesture to the application program. S3: the comprehensive judgment module controls the audio analysis module to work alone, with the following specific steps. S31: the audio analysis module, which comprises a sound-emitting module for emitting sound and a sound collection module for receiving sound, emits sound that is reflected by the hand and received by the sound collection module, calculates the motion-trajectory feature value of the hand from the received signal, and compares the motion-trajectory feature value with the preset template feature value library to obtain a hand gesture and a gesture confidence level. S32: the comprehensive judgment module receives the hand gesture and gesture confidence level output by the audio analysis module, selects the hand gesture of the audio analysis module according to the predefined gesture confidence rule, and outputs the selected hand gesture to the application program. S4: the comprehensive judgment module controls the video analysis module and the audio analysis module to work simultaneously, with the following specific steps. S41: the video analysis module obtains the hand image and motion trajectory and derives a hand gesture and gesture confidence level as in S21. S42: the audio analysis module emits and receives sound and derives a hand gesture and gesture confidence level as in S31. S43: the comprehensive judgment module receives the hand gestures and gesture confidence levels output by the video analysis module and the audio analysis module respectively, selects the hand gesture of either module according to the predefined gesture confidence rule, and outputs the selected hand gesture to the application program.
The beneficial effects of the present invention are as follows. The invention provides a gesture recognition system and recognition method in which the comprehensive judgment module controls the operation of the video analysis module and the audio analysis module according to how adverse the surrounding environment is, receives the hand gestures and gesture confidence levels output by the video analysis module and the audio analysis module, and decides which hand gesture to select by comparing the gesture confidence levels. This not only improves the reliability of the recognized hand gesture but also greatly broadens the application of gesture recognition in different environments.
[Description of the Drawings]
Fig. 1 is a block diagram of the gesture recognition system of the present invention;
Fig. 2 is a flowchart of the interaction among the comprehensive judgment module, the video analysis module, the audio analysis module and the application program in the gesture recognition system of the present invention.
[Embodiments]
The present invention is described in further detail below with reference to the drawings and embodiments. The following embodiments are intended to illustrate the present invention, not to limit its scope.
As shown in Fig. 1 and Fig. 2, the invention provides a gesture recognition system 1 comprising: a video analysis module 10, an audio analysis module 11, and a comprehensive judgment module 12 for receiving the output signals of the video analysis module 10 and the audio analysis module 11.
The video analysis module 10 obtains from a camera 100 the image and motion trajectory of a human hand in front of the electronic device 101, as captured by the camera 100, and compares the captured hand image and motion trajectory with a preset hand-image feature library and motion-trajectory feature library respectively to obtain the hand image feature value and motion-trajectory feature value. Both feature libraries are existing libraries, and the comparison that yields the hand image feature value and motion-trajectory feature value uses existing processing methods. The image feature value and motion-trajectory feature value are then compared with a preset template feature value library to obtain a hand gesture and a gesture confidence level.
The gesture recognition scheme of the video analysis module 10 is essentially the same as conventional video-based gesture recognition. The difference in the present invention is that a gesture confidence level for the hand must additionally be computed. There are many methods for computing the gesture confidence level; one of them is described in the following embodiment.
The video analysis module 10 in effect performs gesture recognition optically: the hand is distinguished by the color difference between the near-field moving object and the background. The larger the color difference between the near-field object and the background, the higher the confidence of the recognition. Assume that:
the average RGB color of the background region is (r_i, g_i, b_i);
the average RGB color of the moving object is (r_m, g_m, b_m);
where r denotes red, g denotes green and b denotes blue. r_i, g_i and b_i are the mean values of the red, green and blue channels over all pixels of the background region, each in the range 0-255; r_m, g_m and b_m are the mean values of the red, green and blue channels over all pixels within the contour of the moving object, each in the range 0-255.
The RGB color-difference metric is then:
D = \sqrt{(r_i - r_m)^2 + (g_i - g_m)^2 + (b_i - b_m)^2}
The confidence of the optical hand-shape recognition can then be approximated as:
X_3 = \frac{D}{255\sqrt{3}} \cdot c
Here c is a constant coefficient, an empirical value obtained from experimental statistics, with a value range of 0.1-10. It mainly accounts for the influence that different hardware environments (for example lens performance, CCD performance and software performance) have on the confidence of optical hand gesture recognition. More complex formulas exist for this influence, but since the influence is fixed once the hardware environment is fixed, these formulas are replaced with a single constant coefficient, generally derived by statistical analysis of a large number of measurement results.
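As an illustration, a minimal sketch of this optical confidence metric is given below. The use of NumPy, the function name and the assumption that the background and hand regions have already been segmented are not part of the patent text, and the normalization by 255√3 follows the reconstruction of the formula above.

```python
# Minimal sketch of the optical gesture-confidence metric described above.
# NumPy, the function name and the pre-segmented inputs are illustrative
# assumptions; the normalization follows the reconstructed formula X3.
import numpy as np

def optical_confidence(background_rgb: np.ndarray,
                       hand_rgb: np.ndarray,
                       c: float = 1.0) -> float:
    """background_rgb, hand_rgb: (N, 3) arrays of 0-255 pixel values for the
    background region and the moving-object (hand) contour; c is the empirical
    hardware coefficient (range 0.1-10)."""
    # average RGB of the background (r_i, g_i, b_i) and the moving object (r_m, g_m, b_m)
    ri, gi, bi = background_rgb.mean(axis=0)
    rm, gm, bm = hand_rgb.mean(axis=0)
    # RGB color-difference metric D
    d = np.sqrt((ri - rm) ** 2 + (gi - gm) ** 2 + (bi - bm) ** 2)
    # approximate confidence X3 = D / (255 * sqrt(3)) * c
    return float(d / (255.0 * np.sqrt(3.0)) * c)
```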
The audio analysis module 11 comprises a sound-emitting module 110 for emitting sound and a sound collection module 111 for receiving sound. The sound emitted by the sound-emitting module 110 is reflected by the hand and received by the sound collection module 111, and from the received signal the motion-trajectory feature value of the hand is calculated using existing audio gesture-recognition methods. The motion-trajectory feature value is then compared with the preset template feature value library to obtain a hand gesture and a gesture confidence level.
There are many recognition methods for audio gesture recognition; preferably, the audio gesture recognition of the present invention adopts the continuous-wave Doppler method, in which the position or motion trajectory of the hand is detected from the frequency shift of a continuous wave caused by the hand's motion. It is further divided into two implementations: the continuous-wave Doppler state method and the continuous-wave Doppler pattern recognition method.
The continuous-wave Doppler state method derives the motion pattern by direct analysis of feature values of the reflected continuous-wave signal, comparing their behavior in time and their energies; the feature values are the summed energy of the spectrum slightly below the corresponding fundamental frequency and the summed energy of the band slightly above it.
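To make the band-energy feature concrete, the sketch below computes the summed spectral energy just below and just above the emitted fundamental for one microphone frame. The sample rate, carrier frequency and band width are example values assumed for illustration, not values stated in the patent.

```python
# Sketch of the band-energy feature values used by the continuous-wave Doppler
# state method: summed spectral energy just below and just above the emitted
# fundamental. Sample rate, carrier frequency and band width are assumed values.
import numpy as np

def doppler_band_energies(mic_frame: np.ndarray,
                          fs: float = 48000.0,
                          f0: float = 20000.0,
                          band_hz: float = 300.0):
    spectrum = np.abs(np.fft.rfft(mic_frame)) ** 2
    freqs = np.fft.rfftfreq(len(mic_frame), d=1.0 / fs)
    below = float(spectrum[(freqs >= f0 - band_hz) & (freqs < f0)].sum())
    above = float(spectrum[(freqs > f0) & (freqs <= f0 + band_hz)].sum())
    # a hand moving toward the device shifts energy above f0, moving away shifts it below
    return below, above
```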
The continuous-wave Doppler pattern recognition method compares a feature value matrix extracted from the signal with template feature value matrices; the best match corresponds to the action type. The feature value matrix is extracted as follows: the band energies around the fundamental frequency of each microphone form the row vectors of the matrix, and the time axis forms the columns. First, an average feature value matrix is extracted for each class of known actions and used as its template; then a feature value matrix is extracted for each unknown action and compared with the template of every known action, and the action type with the minimum Euclidean distance is most likely the actual gesture of that action.
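A minimal sketch of this template-matching step follows. The dictionary layout, the function names and the assumption that all feature matrices share one shape are illustrative choices, not the patent's interfaces.

```python
# Sketch of the continuous-wave Doppler pattern-recognition step: an average
# template matrix per known action class, then minimum-Euclidean-distance
# matching of an unknown action. Names and data layout are assumptions.
import numpy as np

def build_templates(training):
    """training: dict mapping an action label to a list of feature matrices
    (rows: band energies around each microphone's fundamental; columns: time).
    Returns one average template matrix per action class."""
    return {label: np.mean(np.stack(mats), axis=0)
            for label, mats in training.items()}

def classify(feature_matrix, templates):
    """Return (label, distance) of the template with the minimum Euclidean
    distance to the observed feature matrix."""
    distances = {label: float(np.linalg.norm(feature_matrix - tpl))
                 for label, tpl in templates.items()}
    best = min(distances, key=distances.get)
    return best, distances[best]
```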
To describe the audio analysis module of the present invention in detail, the continuous-wave Doppler pattern recognition method is adopted below as the specific embodiment.
In the continuous-wave Doppler pattern recognition method, the hand gesture confidence level can be obtained as follows:
First, the same known gesture action is repeated many times and the average intra-class Euclidean distance is computed; denote it a1.
For an action to be measured that has been assigned to that class, the Euclidean distance between it and the average template is computed; denote it b1. In theory, the smaller b1 is, the higher the confidence of recognizing this action.
The confidence level of the acoustic module can be expressed by the following approximate formula in terms of a1, b1 and n: [formula provided as an image in the original and not recoverable from the text]. Here n is the total number of action classes; its value is determined by the number of action types to be recognized in practice, and is generally on the order of 1-20. For example, suppose 11 action types are defined: left to right, right to left, top to bottom, bottom to top, click, clockwise circle, counterclockwise circle, center to left, center to right, right to center, and left to center. Then n is 11.
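The two distances that feed this confidence can be sketched as follows. The pairwise reading of the intra-class average, the function names and the NumPy usage are assumptions, and the sketch deliberately stops short of the combining formula, which is not recoverable from the text.

```python
# Sketch of the quantities entering the acoustic confidence: a1, the average
# intra-class Euclidean distance over repeated samples of one known gesture
# class (read here as the average pairwise distance), and b1, the distance of
# the action under test to that class's average template. The formula that
# combines a1, b1 and the class count n is not reproduced here.
import numpy as np

def intra_class_distance(samples):
    """a1: average pairwise Euclidean distance between repeated feature
    matrices of the same known gesture class (needs at least two samples)."""
    dists = [np.linalg.norm(samples[i] - samples[j])
             for i in range(len(samples))
             for j in range(i + 1, len(samples))]
    return float(np.mean(dists))

def distance_to_template(sample, class_template):
    """b1: Euclidean distance of the action to be measured from the average
    template of the class it was assigned to (smaller b1 -> higher confidence)."""
    return float(np.linalg.norm(sample - class_template))
```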
The comprehensive judgment module 12, according to a control signal from the application program that uses the gesture recognition system, controls the video analysis module 10 to work alone, the audio analysis module 11 to work alone, or the video analysis module 10 and the audio analysis module 11 to work simultaneously, and receives the hand gestures and gesture confidence levels output by the video analysis module 10 and the audio analysis module 11 respectively. A gesture confidence rule is predefined; it specifies which of the two gestures to output when the confidence levels of the video analysis module 10 and the audio analysis module 11 are both high, both low, or one high and one low. The specific criterion is not limited: it is only necessary to define a rule that lets the comprehensive judgment module 12 select the gesture with the relatively higher confidence. The comprehensive judgment module 12 selects the hand gesture of either the video analysis module 10 or the audio analysis module 11 according to this gesture confidence rule and outputs the selected hand gesture to the application program.
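Since the patent leaves the exact rule open, the sketch below shows one possible gesture confidence rule that simply forwards the gesture with the higher reported confidence. The tuple layout, the optional minimum-confidence threshold and the function name are assumptions.

```python
# One possible gesture confidence rule for the comprehensive judgment module:
# forward the hand gesture whose module reports the higher confidence. The
# patent only requires that some predefined rule make this selection; the
# tuple layout and the optional threshold are illustrative assumptions.
from typing import Optional, Tuple

Result = Tuple[str, float]  # (hand gesture label, gesture confidence level)

def select_gesture(video: Optional[Result],
                   audio: Optional[Result],
                   min_confidence: float = 0.0) -> Optional[str]:
    candidates = [r for r in (video, audio)
                  if r is not None and r[1] >= min_confidence]
    if not candidates:
        return None  # nothing reliable enough to forward to the application
    return max(candidates, key=lambda r: r[1])[0]
```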
The present invention also provides a gesture recognition method, which comprises the following steps (an illustrative sketch of the three operating modes is given after step S43):
S1: the comprehensive judgment module 12 receives the control signal output by the application program that uses the gesture recognition system and, according to this control signal, controls the video analysis module 10 to work alone, the audio analysis module 11 to work alone, or the video analysis module 10 and the audio analysis module 11 to work simultaneously;
S2: the comprehensive judgment module 12 controls the video analysis module 10 to work alone, with the following specific steps:
S21: the video analysis module 10 obtains from a camera 100 the image and motion trajectory of the human hand in front of the electronic device 101, as captured by the camera 100, compares the captured hand image and motion trajectory with a preset hand-image feature library and motion-trajectory feature library respectively to obtain the image feature value and motion-trajectory feature value of the hand, and then compares the image feature value and motion-trajectory feature value with a preset template feature value library to obtain a hand gesture and a gesture confidence level;
S22: the comprehensive judgment module 12 receives the hand gesture and gesture confidence level output by the video analysis module 10, selects the hand gesture of the video analysis module 10 according to a predefined gesture confidence rule, and outputs the selected hand gesture to the application program;
S3: the comprehensive judgment module 12 controls the audio analysis module 11 to work alone, with the following specific steps:
S31: the audio analysis module 11, which comprises a sound-emitting module 110 for emitting sound and a sound collection module 111 for receiving sound, emits sound from the sound-emitting module 110; the sound is reflected by the hand and received by the sound collection module 111, the motion-trajectory feature value of the hand is calculated from the received signal, and the motion-trajectory feature value is then compared with the preset template feature value library to obtain a hand gesture and a gesture confidence level;
S32: the comprehensive judgment module 12 receives the hand gesture and gesture confidence level output by the audio analysis module 11, selects the hand gesture of the audio analysis module 11 according to the predefined gesture confidence rule, and outputs the selected hand gesture to the application program;
S4: the comprehensive judgment module 12 controls the video analysis module 10 and the audio analysis module 11 to work simultaneously, with the following specific steps:
S41: the video analysis module 10 obtains from the camera 100 the image and motion trajectory of the human hand in front of the electronic device 101, as captured by the camera 100, compares them with the preset hand-image feature library and motion-trajectory feature library respectively to obtain the image feature value and motion-trajectory feature value of the hand, and then compares the image feature value and motion-trajectory feature value with the preset template feature value library to obtain a hand gesture and a gesture confidence level;
S42: the audio analysis module 11 emits sound from the sound-emitting module 110; the sound is reflected by the hand and received by the sound collection module 111, the motion-trajectory feature value of the hand is calculated from the received signal, and the motion-trajectory feature value is then compared with the preset template feature value library to obtain a hand gesture and a gesture confidence level;
S43: the comprehensive judgment module 12 receives the hand gestures and gesture confidence levels output by the video analysis module 10 and the audio analysis module 11 respectively, selects the hand gesture of either the video analysis module 10 or the audio analysis module 11 according to the predefined gesture confidence rule, and outputs the selected hand gesture to the application program.
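As referenced above, the following sketch ties the three operating modes S2-S4 together. The Mode enumeration, the duck-typed module objects with an analyze() method returning a (gesture, confidence) pair, and the inline selection rule are all illustrative assumptions rather than the patent's interfaces.

```python
# Illustrative sketch of the three operating modes S2-S4 chosen by the
# application's control signal. The Mode encoding, the module objects and
# their analyze() -> (gesture, confidence) interface are assumptions.
from enum import Enum

class Mode(Enum):
    VIDEO_ONLY = 1   # S2: the video analysis module works alone
    AUDIO_ONLY = 2   # S3: the audio analysis module works alone
    BOTH = 3         # S4: both modules work simultaneously

def recognize(mode: Mode, video_module, audio_module):
    video = video_module.analyze() if mode in (Mode.VIDEO_ONLY, Mode.BOTH) else None
    audio = audio_module.analyze() if mode in (Mode.AUDIO_ONLY, Mode.BOTH) else None
    results = [r for r in (video, audio) if r is not None]
    # S22 / S32 / S43: apply the gesture confidence rule (here: higher wins)
    # and return the selected hand gesture to the application program
    return max(results, key=lambda r: r[1])[0] if results else None
```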
With the gesture recognition system and recognition method provided by the invention, the comprehensive judgment module 12 controls the operation of the video analysis module 10 and the audio analysis module 11 according to how adverse the surrounding environment is, receives the hand gestures and gesture confidence levels output by the video analysis module 10 and the audio analysis module 11, and then selects a hand gesture by comparing the gesture confidence levels. This not only improves the reliability of the recognized hand gesture but also greatly broadens the application of gesture recognition in different environments.
The above are only preferred embodiments of the present invention. It should be pointed out that a person of ordinary skill in the art can also make improvements without departing from the concept of the present invention, and such improvements all fall within the protection scope of the present invention.

Claims (2)

1. A gesture recognition system, characterized in that the system comprises:
a video analysis module, which obtains from a camera the image and motion trajectory of a human hand in front of the electronic device, as captured by that camera, compares the captured hand image and motion trajectory with a preset hand-image feature library and motion-trajectory feature library respectively to obtain the image feature value and motion-trajectory feature value of the hand, and then compares the image feature value and motion-trajectory feature value with a preset template feature value library to obtain a hand gesture and a gesture confidence level;
an audio analysis module, which comprises a sound-emitting module for emitting sound and a sound collection module for receiving sound, wherein the sound emitted by the sound-emitting module is reflected by the hand and received by the sound collection module, the motion-trajectory feature value of the hand is calculated therefrom, and the motion-trajectory feature value is then compared with the preset template feature value library to obtain a hand gesture and a gesture confidence level;
a comprehensive judgment module, which, according to a control signal from the application program that uses the gesture recognition system, controls the video analysis module to work alone, the audio analysis module to work alone, or the video analysis module and the audio analysis module to work simultaneously, receives the hand gestures and gesture confidence levels output by the video analysis module and the audio analysis module respectively, selects the hand gesture of either the video analysis module or the audio analysis module according to a predefined gesture confidence rule, and outputs the selected hand gesture to the application program.
2. A gesture recognition method, characterized in that the method comprises the following steps:
S1: a comprehensive judgment module receives the control signal output by the application program that uses the gesture recognition system and, according to this control signal, controls a video analysis module to work alone, an audio analysis module to work alone, or the video analysis module and the audio analysis module to work simultaneously;
S2: the comprehensive judgment module controls the video analysis module to work alone, with the following specific steps:
S21: the video analysis module obtains from a camera the image and motion trajectory of the human hand in front of the electronic device, as captured by that camera, compares the captured hand image and motion trajectory with a preset hand-image feature library and motion-trajectory feature library respectively to obtain the image feature value and motion-trajectory feature value of the hand, and then compares the image feature value and motion-trajectory feature value with a preset template feature value library to obtain a hand gesture and a gesture confidence level;
S22: the comprehensive judgment module receives the hand gesture and gesture confidence level output by the video analysis module, selects the hand gesture of the video analysis module according to a predefined gesture confidence rule, and outputs the selected hand gesture to the application program;
S3: the comprehensive judgment module controls the audio analysis module to work alone, with the following specific steps:
S31: the audio analysis module, which comprises a sound-emitting module for emitting sound and a sound collection module for receiving sound, emits sound from the sound-emitting module; the sound is reflected by the hand and received by the sound collection module, the motion-trajectory feature value of the hand is calculated therefrom, and the motion-trajectory feature value is then compared with the preset template feature value library to obtain a hand gesture and a gesture confidence level;
S32: the comprehensive judgment module receives the hand gesture and gesture confidence level output by the audio analysis module, selects the hand gesture of the audio analysis module according to the predefined gesture confidence rule, and outputs the selected hand gesture to the application program;
S4: the comprehensive judgment module controls the video analysis module and the audio analysis module to work simultaneously, with the following specific steps:
S41: the video analysis module obtains from the camera the image and motion trajectory of the human hand in front of the electronic device, as captured by that camera, compares the captured hand image and motion trajectory with the preset hand-image feature library and motion-trajectory feature library respectively to obtain the image feature value and motion-trajectory feature value of the hand, and then compares the image feature value and motion-trajectory feature value with the preset template feature value library to obtain a hand gesture and a gesture confidence level;
S42: the audio analysis module emits sound from the sound-emitting module; the sound is reflected by the hand and received by the sound collection module, the motion-trajectory feature value of the hand is calculated therefrom, and the motion-trajectory feature value is then compared with the preset template feature value library to obtain a hand gesture and a gesture confidence level;
S43: the comprehensive judgment module receives the hand gestures and gesture confidence levels output by the video analysis module and the audio analysis module respectively, selects the hand gesture of either the video analysis module or the audio analysis module according to the predefined gesture confidence rule, and outputs the selected hand gesture to the application program.
CN201310309828.0A 2013-07-22 2013-07-22 Gesture recognition system and recognition methods Active CN103399634B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310309828.0A CN103399634B (en) 2013-07-22 2013-07-22 Gesture recognition system and recognition methods

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310309828.0A CN103399634B (en) 2013-07-22 2013-07-22 Gesture recognition system and recognition methods

Publications (2)

Publication Number Publication Date
CN103399634A 2013-11-20
CN103399634B CN103399634B (en) 2016-02-24

Family

ID=49563274

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310309828.0A Active CN103399634B (en) 2013-07-22 2013-07-22 Gesture recognition system and recognition methods

Country Status (1)

Country Link
CN (1) CN103399634B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1520685A (en) * 2001-06-29 2004-08-11 �ʼҷ����ֵ������޹�˾ Picture-in-picture repositioning and/or resizing based on speech and gesture control
CN1534441A (en) * 2003-03-28 2004-10-06 Dynamic feedback for gesture
CN101472066A (en) * 2007-12-27 2009-07-01 华晶科技股份有限公司 Near-end control method of image viewfinding device and image viewfinding device applying the method
US20080261693A1 (en) * 2008-05-30 2008-10-23 Sony Computer Entertainment America Inc. Determination of controller three-dimensional location using image analysis and ultrasonic communication
CN101776949A (en) * 2010-01-13 2010-07-14 北京思比科微电子技术股份有限公司 Depth detection-based body identification control device
CN103038725A (en) * 2010-06-29 2013-04-10 高通股份有限公司 Touchless sensing and gesture recognition using continuous wave ultrasound signals
CN102467905A (en) * 2010-10-28 2012-05-23 鸿富锦精密工业(深圳)有限公司 Gesture recognition appparatus and method
CN102981615A (en) * 2012-11-05 2013-03-20 瑞声声学科技(深圳)有限公司 Gesture recognition system and gesture recognition method

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105094298A (en) * 2014-05-13 2015-11-25 华为技术有限公司 Terminal and terminal based gesture recognition method
CN105094298B (en) * 2014-05-13 2018-06-26 华为技术有限公司 Terminal and the gesture identification method based on the terminal
CN104463119A (en) * 2014-12-05 2015-03-25 苏州触达信息技术有限公司 Composite gesture recognition device based on ultrasound and vision and control method thereof
CN106200888A (en) * 2015-05-04 2016-12-07 南昌欧菲光科技有限公司 Contactless electronic product and control method thereof
CN107894830A (en) * 2017-10-12 2018-04-10 深圳大学 A kind of interaction input method based on acoustic perceptual, system and medium
WO2019071649A1 (en) * 2017-10-12 2019-04-18 深圳大学 Interactive input method, system and medium based on acoustic sensing
CN107894830B (en) * 2017-10-12 2019-07-26 深圳大学 A kind of interaction input method based on acoustic perceptual, system and medium
CN110119209A (en) * 2019-06-13 2019-08-13 漫谷科技股份公司 Audio device control method and device

Also Published As

Publication number Publication date
CN103399634B (en) 2016-02-24

Similar Documents

Publication Publication Date Title
Zhang et al. Data augmentation and dense-LSTM for human activity recognition using WiFi signal
CN103399634B (en) Gesture recognition system and recognition methods
CN104123007B (en) Multidimensional weighted 3D recognition method for dynamic gestures
CN101661329B (en) Operating control method and device of intelligent terminal
CN107300971B (en) The intelligent input method and system propagated based on osteoacusis vibration signal
CN102880865B (en) Dynamic gesture recognition method based on complexion and morphological characteristics
CN202110564U (en) Intelligent household voice control system combined with video channel
CN106899968A (en) A kind of active noncontact identity identifying method based on WiFi channel condition informations
CN110059612B (en) Position-independent gesture recognition method and system based on channel state information
Guo et al. WiReader: Adaptive air handwriting recognition based on commercial WiFi signal
CN105810213A (en) Typical abnormal sound detection method and device
CN103543826A (en) Method for recognizing gesture based on acceleration sensor
CN108182418B (en) Keystroke identification method based on multi-dimensional sound wave characteristics
CN103839257B (en) Method for detecting changes of SAR images of generalized Gaussian K&I
CN102981615B (en) Gesture identifying device and recognition methods
CN102622138B (en) Optical touch control positioning method and optical touch control positioning system
CN107464226A (en) A kind of image de-noising method based on improvement two-dimensional empirical mode decomposition algorithm
CN106685478A (en) Estimation method for frequency hopping signal parameter extracted based on signal time-frequency image information
CN103634680A (en) Smart television play control method and device
CN102169631A (en) Manifold-learning-based traffic jam event cooperative detecting method
CN103279751A (en) Eye movement tracking method on the basis of accurate iris positioning
CN105447506A (en) Gesture recognition method based on interval distribution probability characteristics
CN104503575A (en) Method for designing low-power-consumption gesture recognition circuit device
CN107220607B (en) Motion trajectory behavior identification method based on 3D stationary wavelet
CN101656070B (en) Voice detection method

Legal Events

Code Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant