CN104317386B - A posture-sequence finite-state-machine action recognition method - Google Patents
A posture-sequence finite-state-machine action recognition method
- Publication number
- CN104317386B (application CN201410293405A / CN201410293405.9A)
- Authority
- CN
- China
- Prior art keywords
- action
- limbs
- user
- artis
- state machine
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/23—Recognition of whole body movements, e.g. for sport training
Abstract
The invention discloses a posture-sequence finite-state-machine action recognition method. First, the limb joint data acquired by a Kinect sensor are coordinate-transformed, the transformed data are measured with a unified spatial grid model, and a limb joint coordinate system is established. Then, the joint motion sequences of predefined limb actions are sampled and analyzed by defining limb-joint feature vectors. Finally, regular expressions of limb-action trajectories based on joint motion tracks are established and a posture-sequence finite state machine is constructed, realizing rapid recognition of predefined actions. Test results show that the method has good extensibility and generality: recognition accuracy for 17 predefined limb actions exceeds 94%, and the recognition feedback time is below 0.1 s, satisfying the demands of somatosensory interaction applications.
Description
Technical field
The present invention relates to human-computer interaction technology, and more particularly to a posture-sequence finite-state-machine action recognition method.
Background technology
In the field of human-computer interaction, action recognition is the premise of somatosensory interaction, and action recognition and behavior understanding have become research hotspots in the field [1-3]. To achieve effective interaction, different interactive actions, including limb movements, gestures and static postures, must be defined and recognized [4]. In recent years, abundant action recognition applications based on Kinect somatosensory technology have been developed. Although these applications can effectively track human motion trajectories [5-7], the recognized actions are relatively limited and the recognition methods are hard to extend [8-10]; an action recognition model with extensibility and generality is urgently needed.
At present there are many Kinect-based action recognition methods, such as event triggering, template matching, and machine learning. The Flexible Action and Articulated Skeleton Toolkit (FAAST) mentioned in [11] is a somatosensory middleware between the Kinect toolkit and applications; it mainly recognizes actions via triggered events such as angle, distance and velocity events. This approach has a small computational load with good real-time performance and accuracy, but event triggering is inherently limited and has difficulty recognizing continuous actions. Ellis et al. [12] explored the balance between recognition accuracy and latency, determining key-frame instances from action data sequences to derive action templates; however, an action template preserves only the form and pattern of one class of behaviors and ignores variation. Wang et al. [13] gave a method based on combinations of joint subsets that classifies joint subsets with a high recognition rate, but it operates on pre-segmented data streams and cannot perform online recognition on unsegmented streams. Zhao et al. [14] proposed a structured streaming skeletons (SSS) feature-matching method that builds a feature dictionary and gesture models through offline training, assigns a label to each frame of an unknown action data stream, and predicts the action type online by extracting SSS features. This method effectively solves the problems of erroneous segmentation and insufficient template matching and can recognize actions online from unsegmented data streams, but its computation is complex, its recognition feedback time is unstable, a feature dictionary is required for each action to be recognized, and a large amount of action data must be collected for offline training, so when extending the set of recognizable actions, recognition of a specific action is tightly coupled to its training set.
References:
[1] Yu Tao. Kinect application development and actual combat: In the most natural way to dialogue with the machine [M]. Beijing: China Machine Press, 2012: 46-47 (in Chinese)
[2] Wang J, Xu Z J. STV-based video feature processing for action recognition [J]. Signal Processing, 2013, 93(8): 2151-2168
[3] Xu Guangyou, Cao Yuanyuan. Action recognition and activity understanding: A review [J]. Journal of Image and Graphics, 2009, 14(2): 189-195 (in Chinese)
[4] Van den Bergh M, Carton D, De Nijs R, et al. Real-time 3D hand gesture interaction with a robot for understanding directions from humans [C]// Proceedings of Robot and Human Interactive Communication. Los Alamitos: IEEE Computer Society Press, 2011: 357-362
[5] Zhang Q S, Song X, Shao X W, et al. Unsupervised skeleton extraction and motion capture from 3D deformable matching [J]. Neurocomputing, 2013, 100: 170-182
[6] Shotton J, Sharp T, Kipman A, et al. Real-time human pose recognition in parts from single depth images [J]. Communications of the ACM, 2013, 56(1): 116-124
[7] El-laithy R A, Huang J, Yeh M. Study on the use of Microsoft Kinect for robotics applications [C]// Proceedings of Position Location and Navigation Symposium. Los Alamitos: IEEE Computer Society Press, 2012: 1280-1288
[8] Oikonomidis I, Kyriazis N, Argyros A. Efficient model-based 3D tracking of hand articulations using Kinect [C]// Proceedings of the 22nd British Machine Vision Conference. BMVA Press, 2011: 1-11
[9] Shen Shihong, Li Weiqing. Research on Kinect-based gesture recognition system [C]// Proceedings of the 8th Harmonious Human Machine Environment Conference (HHME2012), CHCI. Beijing: Tsinghua University Press, 2012: 55-62 (in Chinese)
[10] Soltani F, Eskandari F, Golestan S. Developing a gesture-based game for deaf/mute people using Microsoft Kinect [C]// Proceedings of 2012 Sixth International Conference on Complex, Intelligent and Software Intensive Systems (CISIS). Los Alamitos: IEEE Computer Society Press, 2012: 491-495
[11] Suma E A, Krum D M, Lange B, et al. Adapting user interfaces for gestural interaction with the flexible action and articulated skeleton toolkit [J]. Computers & Graphics, 2013, 37(3): 193-201
[12] Ellis C, Masood S Z, Tappen M F, et al. Exploring the trade-off between accuracy and observational latency in action recognition [J]. International Journal of Computer Vision, 2013, 101(3): 420-436
[13] Wang J, Liu Z, Wu Y, et al. Mining actionlet ensemble for action recognition with depth cameras [C]// Proceedings of Computer Vision and Pattern Recognition (CVPR). Los Alamitos: IEEE Computer Society Press, 2012: 1290-1297
[14] Zhao X, Li X, Pang C, et al. Online human gesture recognition from motion data streams [C]// Proceedings of the 21st ACM International Conference on Multimedia. New York: ACM Press, 2013: 23-32
[15] Biswas K K, Basu S K. Gesture recognition using Microsoft Kinect [C]// Proceedings of 2011 5th International Conference on Automation, Robotics and Applications (ICARA). Los Alamitos: IEEE Computer Society Press, 2011: 100-103
[16] Zhang Yi, Zhang Shuo, Luo Yuan, et al. Gesture track recognition based on Kinect depth image information and its applications [J]. Application Research of Computers, 2012, 29(9): 3547-3550 (in Chinese)
[17] Chaquet J M, Carmona E J, Fernandez-Caballero A. A survey of video datasets for human action and activity recognition [J]. Computer Vision and Image Understanding, 2013, 117(6): 633-659
The content of the invention
The technical problem to be solved by the invention is to provide, in view of the shortcomings of the prior art, a posture-sequence finite-state-machine action recognition method.
The technical scheme is as follows:
In a posture-sequence finite-state-machine action recognition method, first, the limb joint data acquired by a somatosensory interaction device are transformed into a user-centered limb joint coordinate system; by defining limb-joint feature vectors, limb action sequences are sampled and analyzed, regular expressions of predefined motion trajectories are established, and a posture-sequence finite state machine is constructed, thereby realizing the parsing and recognition of predefined limb actions.
In the described posture-sequence finite-state-machine action recognition method, the limb joint coordinate system is established by defining the user space coordinate system as follows: the user's right-hand direction is the positive x-axis, directly above the head is the positive y-axis, the direction facing the front of the interaction device is the positive z-axis, and the center between the two shoulders is the coordinate origin. The transformation between a coordinate point P(x, y, z) in the Kinect space coordinate system oxyz and the coordinate point P'(x', y', z') in the user space coordinate system o'x'y'z' can be described by formula (1).
In formula (1), O'(x_0, y_0, z_0) represents the origin of the user space coordinate system o'x'y'z', and θ is the rotation angle of the user relative to the xoy plane of the sensor, θ = arctan((x_r − x_l)/(z_r − z_l)), where x_r > x_l and −45° < θ < +45°.
The metric unit in the user coordinate system is described as a cubic unit grid. For users of different heights, the proportional correspondence between height and limb length must be considered so that limb actions are described in a unified way. After the coordinate transformation of formula (1), a space grid model specific to the current user is established in the user coordinate system; the grid model is divided into w^3 three-dimensional cubic cells, where w is the one-dimensional grid division number, taken as w = 11.
In the user coordinate system, centered on the origin, the grids of the three dimensions are partitioned proportionally: the positive-to-negative ratio along the x-axis is 1:1, along the y-axis 3:8, and along the z-axis 6:5. By computing the side length d of a unit cell, action types are described in a unified way; the cell side length is defined according to the user's relative height, d = h/(w − 1), where h is the relative height of the current user in the user coordinate system and w is the one-dimensional grid division number.
After the three-dimensional grid partition model is established, any region in the user coordinate system can be described in the form of cubic cells, ensuring that an independent user limb joint coordinate system is always established centered on the user, thereby eliminating individual user differences as much as possible.
In the described posture-sequence finite-state-machine action recognition method, the limb-joint feature vector is defined as follows. A limb-joint feature vector comprises joint space motion vectors, joint motion time intervals, and joint space distances; the definition of the limb-joint feature vector V is given in formula (2).
In formula (2), T represents the action type; k (0 ≤ k ≤ 19) represents the joint index; i (i = 0, 1, …, s) represents the current sample frame; s represents the end frame at which the corresponding joint reaches the next particular sample point; J_k^i J_k^{i+1} represents the space motion vector of joint k moving from the current sample frame i to the next frame i+1; J_k^i represents the space coordinate point (x_k^i, y_k^i, z_k^i) of joint k in the i-th frame; Δt_k^s represents the time interval for joint k to move along its track from coordinate point J_k^0 to coordinate point J_k^s; |P_m P_n| represents the space distance between particular joints of the human body, used as the proportional feature verification quantity in the grid model.
For each joint the space motion vector J_k^i J_k^{i+1} is defined and the motion direction and track of the limb joint are computed. The transfer duration of each sampled step of an action can be described by the time interval Δt_k^s = t_k^s − t_k^0, where t_k^0 and t_k^s are the times of the starting and ending sample frames of each group for joint k. From the definition in formula (2), with J_k^i = (x_k^i, y_k^i, z_k^i) and J_k^{i+1} = (x_k^{i+1}, y_k^{i+1}, z_k^{i+1}), the space motion vector is expressed as J_k^i J_k^{i+1} = (x_k^{i+1} − x_k^i, y_k^{i+1} − y_k^i, z_k^{i+1} − z_k^i).
In |P_m P_n|, P_m and P_n represent the joints at the two ends of a human limb segment; m and n represent the starting and ending index numbers of the joint set, where m < n; the point (x_j, y_j, z_j) represents the space coordinate of the corresponding joint of the limb segment; and j (m ≤ j ≤ n − 1) represents the index variable of the corresponding joint during computation. The fixed space distance between limb joints is then computed as follows:
|P_m P_n| = Σ_{j=m}^{n−1} √((x_{j+1} − x_j)^2 + (y_{j+1} − y_j)^2 + (z_{j+1} − z_j)^2)
According to the limb-joint feature vector parameters defined above, various interactive actions can be defined. Action types are classified and elaborated according to the body part and motion characteristics involved; the limb-joint feature vectors of three representative action classes comprise the feature vector of a right-leg side kick, the feature vector of the right hand drawing a circle, and the feature vector of a horizontal two-hand spread.
According to formula (2), the limb-joint feature vectors V(T, k) representing the three representative action classes are defined. When joint k reaches the next particular sample point, the end frame s is determined and the current feature vector is analyzed as the input parameter of the state transition function; the sample frame i is then reset to zero to await the next end frame, after which the vector is analyzed and reset again, until the last sample point.
1) For the right-leg side kick, the general feature data of the right foot joint (k = 19) are extracted to define the limb-joint feature vector, where L is the leg length.
2) For the right-hand circle action, the general feature data of the right hand joint (k = 11) are extracted to define the limb-joint feature vector, where D is the arm length.
3) For the horizontal two-hand spread, the general feature data of the left/right hand joints (k = 7, 11) are extracted to define the limb-joint feature vector.
By analogy, limb-joint feature vectors can be defined for other limb actions with this method; the feature vectors are then analyzed by the posture-sequence finite state machine to realize action recognition.
In the described posture-sequence finite-state-machine action recognition method, the posture-sequence finite state machine is constructed by defining the posture-sequence finite state machine Λ, whose five-tuple representation is given in formula (3):
Λ = (S, Σ, δ, s_0, F)   (3)
In formula (3), S represents the state set {s_0, s_1, …, s_n, f_0, f_1}, describing each specific posture state of an action; Σ represents the alphabet of input limb-joint feature vectors and limitation parameters, wherein the symbol ¬ represents logical NOT; δ is the transfer function, defined as S × Σ → S, representing the transition of the posture-sequence finite state machine from the current state to a successor state; s_0 represents the start state; F = {f_0, f_1} is the final-state set, representing the recognition-success state and the recognition-invalid state respectively.
In the alphabet Σ, the variable u represents the set of all limb-joint feature vectors V corresponding to an action type; the feature vectors represent the discrete domain rules of the motion track in the space grid, and the track regular expression of an action can be constructed from the domain rules.
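The idea of turning domain rules into a track regular expression can be sketched as follows. This is a minimal Python illustration under stated assumptions: the grid cells, their symbol mapping, and the pattern itself are hypothetical examples, not the predefined actions of the invention.

```python
import re

# Hypothetical mapping: each grid-cell domain a tracked joint may pass through
# is assigned one letter, so a motion track becomes a string of symbols.
DOMAIN_SYMBOLS = {(0, 0, 0): "a", (1, 0, 0): "b", (2, 0, 0): "c", (2, 1, 0): "d"}

def track_to_string(cells):
    """Encode a sequence of visited grid cells as a symbol string."""
    return "".join(DOMAIN_SYMBOLS[c] for c in cells)

# A made-up track regex: start in cell a, pass through b into c, optionally d,
# allowing the joint to linger several frames in any cell along the way.
SWIPE_RIGHT = re.compile(r"a+b+c+d*")

observed = track_to_string([(0, 0, 0), (1, 0, 0), (1, 0, 0), (2, 0, 0)])
print(bool(SWIPE_RIGHT.fullmatch(observed)))  # True: the track fits the pattern
```

A track that skips a required domain (e.g. the string "ac") fails `fullmatch`, which is how an out-of-pattern motion would be rejected.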
Path limitation p=xyz | x ∈ [xmin,xmax],y∈[ymin,ymax],z∈[zmin,zmax] specific posture is entered
The scope control of row key point, under any circumstance beyond predefined path domain, i.e.,It is true, then is marked as invalid shape
State;
Timestamp t ∈ [tstart,tend] time of the action required for current state to successor states is shifted is defined, if
Some state of action is not transferred to follow-up effective status within the defined time, i.e.,It is true, then jumps to disarmed state;
Each action is made up of several typical static postures, the defined quantity of state of static posture correspondence, every kind of state
Amount is calculated by key point characteristic in space lattice to be obtained, and operating state transfer must is fulfilled for path limitation p and timestamp t
Condition so that identification maneuver type, understand that user mutual is intended to;Posture sequence finite state machine can be described by five-tuple
Every attribute characteristic and each step transfer process, posture sequence finite state machine running:In original state s0Under, press
First effective status s is reached according to predefined action1If, the posture of subsequent time still in predefined scope,
Reach follow-up effective status sk, by that analogy, until the state f that hits pay dirk0, i.e. identification maneuver success;Initial and any effective
Under state, if behavior is limited or timestamp scope beyond path, it is disarmed state directly to mark sequence action, that is, is recognized
Baulk.After any done state is reached, the operation of current posture sequence limited row state machine is finished, reinitialize into
The identification of next group of limb action of row.
The present invention proposes a posture-sequence finite-state-machine action recognition method that realizes rapid recognition of predefined actions. The method describes limb-action characteristics with limb-joint feature vectors, samples and analyzes predefined limb-action sequences, establishes limb-action track regular expressions, and constructs a posture-sequence finite state machine to realize limb-action recognition. The method can completely describe any action or gesture without offline training and learning, has strong generality and extensibility, achieves high recognition accuracy for simple and continuous actions with good real-time performance, and satisfies the demands of somatosensory interaction applications.
The advantages of the method are mainly: 1) high action recognition accuracy: 30 users of different heights and body shapes each performed the test sample three times, and the recognition accuracy exceeded 94%; 2) fast recognition feedback: the measured feedback time is between 0.060 s and 0.096 s, below 0.1 s; 3) when extending the action types, no large amount of action data need be collected for offline training of a specific action; only the track regular expression of the limb action need be defined, giving strong generality and extensibility; 4) initial and final states are defined in the proposed posture-sequence finite-state-machine model, and real-time analysis of limb-joint feature vectors allows unsegmented action data streams to be handled; 5) the posture-sequence finite state machine fits continuous action tracks with discrete posture sequences and is therefore suitable for recognizing simple and continuous actions. However, the method is not robust enough: any state that does not meet the predefined rules during recognition is regarded as invalid, so posture-sequence recognition is rather sensitive to personal style, and user actions should stay within the specification as much as possible.
Brief description of the drawings
Fig. 1 is the framework of the posture-sequence finite-state-machine action recognition method;
Fig. 2 shows the space coordinate transformation: a, the user space coordinate system; b, a top view of the user's rotation in the Kinect coordinate system;
Fig. 3 is the cross-section of the space grid division in the xoy plane;
Fig. 4 shows the feature-data representation in the user space coordinate system;
Fig. 5 is a schematic diagram of the limb-joint feature vectors of three representative action classes: a, leg action (right-leg side kick); b, one-hand action (right hand drawing a circle); c, two-hand action (horizontal two-hand spread);
Fig. 6 is the posture-sequence finite-state-machine prototype;
Fig. 7 is a schematic diagram of partial action tracks;
Fig. 8 is the posture-sequence finite state machine of an action;
Fig. 9 shows the action recognition results: a, leg actions; b, left/right hand drawing a circle; c, left/right hand lifting upward; d, left/right hand pressing downward; e and f, both hands pushing obliquely; g, horizontal two-hand spread; h, horizontal two-hand contraction; i, left/right hand sliding horizontally;
Embodiment
The present invention is described in detail below in conjunction with specific embodiments.
To avoid the deficiencies of conventional body-action recognition methods, which are hard to extend and have low recognition efficiency, the present invention proposes a posture-sequence finite-state-machine action recognition method. A specific limb action is regarded as a group of motion sequences described by multiple postures on a timeline, i.e. a posture sequence. The proposed method mainly uses limb-joint feature vectors to describe limb-action characteristics; by sampling and analyzing predefined limb-action sequences, it establishes track regular expressions of limb actions and constructs a posture-sequence finite state machine, thereby realizing the analysis and recognition of limb actions through the track regular expressions.
1 The posture-sequence finite-state-machine action recognition method
The framework of the posture-sequence finite-state-machine action recognition method is shown in Fig. 1. First, the limb joint data acquired by the somatosensory interaction device are transformed into the user-centered limb joint coordinate system; by defining limb-joint feature vectors, limb action sequences are sampled and analyzed, regular expressions of predefined motion trajectories are established, and a posture-sequence finite state machine is constructed, thereby realizing the parsing and recognition of predefined limb actions.
1.1 Establishing the limb joint coordinate system
To eliminate individual user differences as much as possible, the spatial description of user actions must be transformed from the device space coordinate system to the user space coordinate system, establishing interactive action features that match the user's individual attributes. The present invention defines the user space coordinate system as follows: the user's right-hand direction is the positive x-axis, directly above the head is the positive y-axis, the direction facing the front of the interaction device is the positive z-axis, and the center between the two shoulders is the coordinate origin. During action recognition, the user's body is not necessarily perpendicular to the plane of the interaction device, so the acquired user limb joint data must be transformed to establish the user limb joint coordinate system. The space coordinate transformation is shown in Fig. 2: Fig. 2a depicts the user space coordinate system, where O' represents the origin of the user space coordinate system o'x'y'z'; Fig. 2b depicts a top view of the user rotated around the y-axis in the Kinect space coordinate system, where L(x_l, z_l) represents the mapped point of the user's left shoulder in the Kinect space coordinate system, R(x_r, z_r) the mapped point of the right shoulder, and θ the rotation angle of the user relative to the xoy plane of the device (−45° < θ < +45°).
Because the acquired limb joint data are mirror-symmetric [15], the transformation between a coordinate point P(x, y, z) in the Kinect space coordinate system oxyz and the coordinate point P'(x', y', z') in the user space coordinate system o'x'y'z' can be described by formula (1).
In formula (1), O'(x_0, y_0, z_0) represents the origin of the user space coordinate system o'x'y'z', and θ is the rotation angle of the user relative to the xoy plane of the sensor, θ = arctan((x_r − x_l)/(z_r − z_l)), where x_r > x_l and −45° < θ < +45°.
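The facing angle and the change into user space can be sketched as follows. This is a minimal Python illustration; since formula (1) itself is not reproduced in this text, the translate-then-rotate form below is an assumption consistent with the surrounding description (shoulder-center origin, rotation about the y-axis by θ).

```python
import math

def facing_angle(left_shoulder, right_shoulder):
    """theta = arctan((x_r - x_l) / (z_r - z_l)): the user's rotation about y."""
    (xl, _, zl), (xr, _, zr) = left_shoulder, right_shoulder
    return math.atan2(xr - xl, zr - zl)

def to_user_space(p, origin, theta):
    """Translate a Kinect-space point by the shoulder-center origin O', then
    rotate it about the y-axis by theta (assumed form of formula (1))."""
    x, y, z = (p[i] - origin[i] for i in range(3))
    c, s = math.cos(theta), math.sin(theta)
    return (c * x - s * z, y, s * x + c * z)

# With theta = 0 the transform reduces to a pure translation to the shoulder center.
print(to_user_space((1.0, 2.0, 3.0), (1.0, 0.0, 3.0), 0.0))  # (0.0, 2.0, 0.0)
```

`math.atan2` is used instead of a raw arctangent of the quotient so the sign of θ is preserved when the numerator is negative; within the stated range −45° < θ < +45° the two agree.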
The metric unit in the user coordinate system is described as a cubic unit grid. For users of different heights, the proportional correspondence between height and limb length must be considered so that limb actions are described in a unified way. After the coordinate transformation of formula (1), a space grid model specific to the current user is established in the user coordinate system. In the present invention the grid model is divided into w^3 three-dimensional cubic cells (w is the one-dimensional grid division number, empirically taken as w = 11); the cross-section of the space grid in the xoy plane is shown in Fig. 3.
As shown in Fig. 3, in the user coordinate system, centered on the origin, the grids of the three dimensions are partitioned proportionally: the positive-to-negative ratio along the x-axis is 1:1, along the y-axis 3:8, and along the z-axis 6:5. By computing the side length d of a unit cell, action types are described in a unified way. The present invention defines the cell side length according to the user's relative height: d = h/(w − 1), where h is the relative height of the current user in the user coordinate system and w is the one-dimensional grid division number.
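The grid division can be sketched as follows. This is a Python illustration under stated assumptions: the positive:negative ratios are read as cell counts summing to w = 11 along each axis (y 3:8, z 6:5), with the middle x cell centered on the origin since 11 is odd; the clamping of out-of-range points to the boundary cells is also an assumption.

```python
W = 11  # one-dimensional grid division number

def cell_side(h, w=W):
    """Unit cell side length d = h / (w - 1) for a user of relative height h."""
    return h / (w - 1)

def cell_index(value, d, negative_cells):
    """Map one coordinate to its integer cell index along one axis (clamped)."""
    return min(W - 1, max(0, int(value // d) + negative_cells))

def grid_cell(p, h):
    d = cell_side(h)
    x, y, z = p
    return (cell_index(x + d / 2, d, 5),  # x: 1:1, middle cell straddles the origin
            cell_index(y, d, 8),          # y: 3 cells above, 8 below the shoulders
            cell_index(z, d, 5))          # z: 6 cells in front, 5 behind

# The shoulder-center origin falls in cell (5, 8, 5) for a 1.7 relative height.
print(grid_cell((0.0, 0.0, 0.0), 1.7))  # (5, 8, 5)
```

The asymmetric y split reflects that the origin sits at the shoulders, with more of the body (and hence more cells) below it than above; the z split reflects that actions extend further in front of the user than behind.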
After the three-dimensional grid partition model is established, any region in the user coordinate system can be described in the form of cubic cells, ensuring that an independent user limb joint coordinate system is always established centered on the user, thereby eliminating individual user differences as much as possible.
1.2 Defining limb-joint feature vectors
An action at a given moment is the state of a static posture; a motion sequence of one or more human joints in space is a dynamic behavior [16]. Before recognizing actions, general feature data must be described in the user space coordinate system; general feature data usually comprise the relative three-dimensional coordinates of joints, joint space motion vectors, and the space distances between joints. The description of limb-action feature data is shown in Fig. 4.
The present invention defines limb-joint feature vectors to describe motion feature data. By computing and analyzing the parameters of the limb-joint feature vectors, the dynamic sequences formed by combinations of specific postures, i.e. limb actions, are recognized. A limb-joint feature vector comprises joint space motion vectors, joint motion time intervals, and joint space distances; the definition of the limb-joint feature vector V is given in formula (2).
In formula (2), T represents the action type; k (0 ≤ k ≤ 19) represents the joint index; i (i = 0, 1, …, s) represents the current sample frame; s represents the end frame at which the corresponding joint reaches the next particular sample point; J_k^i J_k^{i+1} represents the space motion vector of joint k moving from the current sample frame i to the next frame i+1; J_k^i represents the space coordinate point (x_k^i, y_k^i, z_k^i) of joint k in the i-th frame; Δt_k^s represents the time interval for joint k to move along its track from coordinate point J_k^0 to coordinate point J_k^s; |P_m P_n| represents the space distance between particular joints of the human body, used as the proportional feature verification quantity in the grid model.
For each joint the space motion vector J_k^i J_k^{i+1} is defined and the motion direction and track of the limb joint are computed. The transfer duration of each sampled step of an action is described by the time interval Δt_k^s = t_k^s − t_k^0, where t_k^0 and t_k^s are the times of the starting and ending sample frames of each group for joint k. From the definition in formula (2), with J_k^i = (x_k^i, y_k^i, z_k^i) and J_k^{i+1} = (x_k^{i+1}, y_k^{i+1}, z_k^{i+1}), the space motion vector is expressed as J_k^i J_k^{i+1} = (x_k^{i+1} − x_k^i, y_k^{i+1} − y_k^i, z_k^{i+1} − z_k^i).
In |P_m P_n|, P_m and P_n represent the joints at the two ends of a human limb segment; m and n represent the starting and ending index numbers of the joint set, where m < n; the point (x_j, y_j, z_j) represents the space coordinate of the corresponding joint of the limb segment; and j (m ≤ j ≤ n − 1) represents the index variable of the corresponding joint during computation. The fixed space distance between limb joints is then computed as follows:
|P_m P_n| = Σ_{j=m}^{n−1} √((x_{j+1} − x_j)^2 + (y_{j+1} − y_j)^2 + (z_{j+1} − z_j)^2)
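The three feature quantities of formula (2) can be computed as in the following sketch (Python, with assumed list-of-tuples containers for frame coordinates and joint chains):

```python
import math

def motion_vector(j_i, j_i1):
    """Space motion vector J_k^i J_k^{i+1} between consecutive sample frames."""
    return tuple(b - a for a, b in zip(j_i, j_i1))

def time_interval(times):
    """Delta t_k^s = t_k^s - t_k^0 over one sampled segment of an action."""
    return times[-1] - times[0]

def limb_distance(chain):
    """|P_m P_n|: summed segment lengths along one limb joint chain."""
    return sum(math.dist(chain[j], chain[j + 1]) for j in range(len(chain) - 1))

print(motion_vector((0.0, 0.0, 0.0), (1.0, 2.0, 2.0)))   # (1.0, 2.0, 2.0)
print(limb_distance([(0.0, 0.0, 0.0), (3.0, 4.0, 0.0)]))  # 5.0
```

`limb_distance` over, say, the shoulder-elbow-wrist-hand chain would give the arm length D used as the proportional verification quantity in the grid model.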
According to limbs node diagnostic vector parameter defined above, various interactive actions can be defined.According to metastomium
It is different with kinetic characteristic that type of action is subjected to classification elaboration, limbs node diagnostic vector such as Fig. 5 institutes of three class representatives action
Show, Fig. 5 a are the limbs node diagnostic vector schematic diagram that right leg side is kicked, Fig. 5 b are that the limbs node diagnostic vector of the right hand stroke circle is shown
It is intended to, Fig. 5 c are the limbs node diagnostic vector schematic diagram of both hands horizontal development.
According to formula (2), definition can represent the limbs node diagnostic vector V (T, k), as artis k of three class representatives action
When reaching next particular sample point, end frame s is determined, and the input of current characteristic vector as state transition function is joined
Number is analyzed, and by sample frame i zero setting, waits end frame next time, then is analyzed, then zero setting, to the last a sampled point.
1) For the right-leg side kick in Fig. 5a, the generic feature data of the right-foot joint (k = 19) are extracted, thereby defining the limb-node feature vector, where L is the leg length.
2) For the right-hand circle in Fig. 5b, the generic feature data of the right-hand joint (k = 11) are extracted, thereby defining the limb-node feature vector, where D is the arm length.
3) For the two-hand horizontal spread in Fig. 5c, the generic feature data of the left/right hand joints (k = 7, 11) are extracted, thereby defining the limb-node feature vector.
By analogy, limb-node feature vectors are defined with the same method for the other limb actions, and the posture sequence finite state machine then analyses these feature vectors to realize action recognition.
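As a sketch, the components of V in formula (2) can be grouped in a simple container; the field names and example values are assumptions for illustration, not the patent's data structure:

```python
from dataclasses import dataclass

@dataclass
class LimbFeature:
    action_type: str   # T, the action type
    joint: int         # k, joint index (0..19 in the Kinect skeleton)
    motion: tuple      # J_k^i -> J_k^{i+1} displacement (dx, dy, dz)
    dt: float          # Δt_k^s, time from start frame to end frame s
    limb_dist: float   # |PmPn|, the ratio/scale verification quantity

f = LimbFeature("right-leg side kick", 19, (0.1, 0.0, 0.0), 0.45, 0.9)
print(f.joint)  # 19
```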
1.3 Construction of the posture sequence finite state machine
Human natural interactive actions are diverse and changeable[17], so a general and efficient recognition method is required. Each action is composed of the continuous motion trajectories of the corresponding limb joints; a continuous trajectory can be fitted by discrete key points, each key point corresponding to a specific posture state, and by recognizing the transition process between these states the action can be judged. Based on this idea, the present invention proposes a posture sequence finite state machine method for recognizing predefined limb actions. A posture sequence is the motion sequence of one action described on a timeline by multiple postures; the posture sequence finite state machine describes the finite states of each action and the transitions between them. The present invention defines the posture sequence finite state machine Λ, whose five-tuple representation is given in formula (3).
Λ=(S, Σ, δ, s0,F) (3)
In formula (3), S is the state set {s0, s1, …, sn, f0, f1}, describing each specific posture state of the action; Σ is the input alphabet of limb-node feature vector sets and restriction parameters, in which the symbol ¬ denotes logical negation; δ is the transition function, defined as S × Σ → S, which transforms the posture sequence finite state machine from the current state into a successor state; s0 is the start state; F = {f0, f1} is the set of final states, representing the recognition-success state and the recognition-invalid state respectively.
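A minimal sketch of the five-tuple as code; the toy states, symbols and transition table below are illustrative, not the patent's exact definition:

```python
# Λ = (S, Σ, δ, s0, F): δ is a dict, f0/f1 are the final states.
class PostureFSM:
    SUCCESS, INVALID = "f0", "f1"

    def __init__(self, delta):
        self.delta = delta    # transition function δ: (state, symbol) -> state
        self.state = "s0"     # start state s0

    def step(self, symbol):
        # Any undefined transition falls into the invalid final state f1.
        self.state = self.delta.get((self.state, symbol), self.INVALID)
        return self.state

    def finished(self):
        return self.state in (self.SUCCESS, self.INVALID)

# Toy two-posture action: s0 --u1--> s1 --u2--> f0
fsm = PostureFSM({("s0", "u1"): "s1", ("s1", "u2"): "f0"})
fsm.step("u1")
fsm.step("u2")
print(fsm.state)  # f0, i.e. recognition succeeded
```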
In the alphabet Σ, the variable u denotes the set of all limb-node feature vectors V corresponding to a given action type; the feature vectors represent the discrete point-domain regularity of the motion trajectory in the space grid, and from these domain rules the track regular expression of the action can be constructed.
The path restriction p = {(x, y, z) | x ∈ [xmin, xmax], y ∈ [ymin, ymax], z ∈ [zmin, zmax]} bounds the key points of a specific posture; whenever the trajectory leaves the predefined path domain, i.e. ¬p is true, the sequence is marked as an invalid state.
The timestamp t ∈ [tstart, tend] defines the time required for the action to transfer from the current state to a successor state; if some state of the action does not transfer to the next valid state within the defined time, i.e. ¬t is true, the machine jumps to the invalid state.
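The two restriction checks can be sketched as simple predicates; the numeric ranges used here are hypothetical:

```python
def within_path(pt, limits):
    """Path restriction p: each coordinate must stay inside its range."""
    (x, y, z) = pt
    (xmin, xmax), (ymin, ymax), (zmin, zmax) = limits
    return xmin <= x <= xmax and ymin <= y <= ymax and zmin <= z <= zmax

def within_time(t, t_start, t_end):
    """Timestamp restriction: the transition must occur inside [t_start, t_end]."""
    return t_start <= t <= t_end

# Hypothetical bounding box around a hand posture (metres) and a 1 s window.
limits = ((-0.5, 0.5), (0.8, 1.6), (-0.3, 0.3))
print(within_path((0.1, 1.2, 0.0), limits) and within_time(0.4, 0.0, 1.0))  # True
```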
Each action consists of several typical static postures, each static posture corresponding to a defined state quantity computed from the key-point feature data in the space grid. A state transition must satisfy both the path restriction p and the timestamp t; in this way the action type is recognized and the user's interaction intent understood. The five-tuple describes every attribute and every transition step of the posture sequence finite state machine; the state-graph model of its operation is shown in Fig. 6.
From the initial state s0, the predefined action reaches the first valid state s1; if the posture at the next moment is still within the predefined range, the subsequent valid state sk is reached, and so on, until the acceptable final state f0 is hit, i.e. the action is recognized successfully. From the initial state or any valid state, if the behaviour exceeds the path restriction or the timestamp range, the sequence is directly marked as the invalid state, i.e. recognition fails. After any final state is reached, the current posture sequence finite state machine has finished running and is reinitialized to recognize the next limb action. The state-transition table (Table 1) is obtained from the state graph of the posture sequence finite state machine.
Table 1 State-transition table of the posture sequence finite state machine
In the table, k, x = 0, 1, 2, …, n, with k ≠ k + x ≤ n, where n is the total number of intermediate valid states required to recognize the action. The domain rule accepted by the posture sequence finite state machine is:
Using the definition of formula (3), the actions of Section 1.2 are described with the posture sequence finite state machine. The input alphabet is Σ = {ai, bi, ci, di, ei, fi, gi, hi, mi}, where i = 0, 1, and each letter variable describes the associated spatial range, i.e. the point domain, of a sample point in the space grid model. A motion trajectory can be fitted by several domains, and a string of point domains constitutes the discretized description of that trajectory. As shown in Fig. 7, point domains in the negative-x half-space carry the subscript i = 0, and point domains in the positive-x half-space carry the subscript i = 1.
For the three representative actions given in Section 1.2, namely the right-leg side kick, the right-hand circle and the two-hand horizontal spread, partial trajectory schematics are shown in Fig. 7, and Table 2 gives their feature-vector point-domain strings.
Table 2 Feature-vector point-domain strings of the three representative actions
The track regular expression of an action can be extracted from its feature-vector point-domain string. In the alphabet Σ, Σ_{i(1-i)} = Σ_i ∧ Σ_{1-i}, where i = 0, 1, meaning that point domains symmetric about the yoz plane are established simultaneously. After rearrangement and simplification, the track regular expressions of the actions are given in formula (4).
R = a_i c_i | d_i e_{1-i} f_i g_i h_i | d_{i(1-i)} e_{i(1-i)} h_{i(1-i)}    (4)
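Since a track expression is an ordinary regular language over point-domain symbols, it can even be checked with a stock regex engine. The string encoding (letter plus side index i) and the simplified pattern below are assumptions for illustration, not the patent's exact expression:

```python
import re

# Each symbol is encoded as letter + side index: "a1" means point domain a
# with i = 1 (positive-x half-space). Backreferences enforce that one
# alternative keeps a consistent side, loosely mirroring formula (4).
track_re = re.compile(
    r"^(a(?P<i>[01])c(?P=i)"                       # side kick (and its mirror)
    r"|d(?P<j>[01])e[01]f(?P=j)g(?P=j)h(?P=j)"     # hand circle
    r"|d0d1e0e1h0h1)$"                             # two-hand horizontal spread
)

for s in ["a1c1", "d1e0f1g1h1", "d0d1e0e1h0h1", "a1c0"]:
    print(s, bool(track_re.match(s)))  # the first three match, "a1c0" does not
```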
The finite state machine diagram of the three representative actions and their symmetric variants is drawn from track regular expression (4) and shown in Fig. 8, where the invalid state f1 is omitted, s0 is the initial state, sk are the intermediate valid states, and f0 is the success state that accepts a point-domain string. From the initial state or any valid state, if the action exceeds the path restriction or the timestamp range, it is directly marked as the invalid state f1, i.e. recognition fails. After any final state is reached, the posture sequence finite state machine of the current action has finished running and is reinitialized to recognize the next group of action behaviour.
The action set must be pruned synchronously while the posture sequence finite state machine runs. The steps of the optimization algorithm are as follows:
1) Initialize the set {T} of all possible actions = {"right-leg side kick", "right-hand circle", "two-hand horizontal spread", …}, where each specific action corresponds to several domain-string expressions;
2) During operation of the posture sequence finite state machine, after every state transition exclude all impossible actions from the set, retaining only the actions that are still possible;
3) On reaching a final state, if the final state is the invalid state, produce no output and restart; if it is the acceptable success state, the only element of the current set is the action type: output the action type, jump to step 1, and continue recognition in a loop.
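The three steps above can be sketched as follows; the domain strings attached to each action are invented placeholders:

```python
# Candidate-set pruning: after every observed symbol, drop each action whose
# domain string can no longer match the observed prefix.
ACTIONS = {
    "right-leg side kick": ["a1", "c1"],
    "right-hand circle":   ["d1", "e0", "f1", "g1", "h1"],
    "two-hand spread":     ["d0d1", "e0e1", "h0h1"],
}

def recognize(observed):
    candidates = set(ACTIONS)
    for step, symbol in enumerate(observed):
        candidates = {a for a in candidates
                      if step < len(ACTIONS[a]) and ACTIONS[a][step] == symbol}
        if not candidates:
            return None                # invalid state f1: no output, restart
    full = [a for a in candidates if len(ACTIONS[a]) == len(observed)]
    return full[0] if len(full) == 1 else None

print(recognize(["d1", "e0", "f1", "g1", "h1"]))  # right-hand circle
```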
Finally, the semantics and use of each interactive action are defined by the user, who assigns new meaning to the recognized action type: for example, lifting the left or right leg triggers scene walkthrough, sliding the left or right hand turns a document page, and the two-hand horizontal spread opens a curtain, thereby realizing motion-sensing interaction applications.
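A user-defined semantic binding is then no more than a lookup table; the action names and command identifiers below are assumptions:

```python
# Map recognized action types to application commands (names illustrative).
SEMANTICS = {
    "left/right leg lift": "scene_walkthrough",
    "left/right hand slide": "page_turn",
    "two-hand horizontal spread": "open_curtain",
}

def dispatch(action_type):
    # A real application would trigger the UI behaviour bound to the command.
    return SEMANTICS.get(action_type)

print(dispatch("two-hand horizontal spread"))  # open_curtain
```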
2 Experimental results and analysis
2.1 Experimental tests and results
Recognition of 17 limb actions was implemented according to the posture sequence finite state machine model and algorithm proposed by the present invention, and tested on an Intel Xeon X3440 CPU (2.53 GHz) with 4 GB of memory under Windows 7 x64. The limb action definitions are given in Table 3; by motion characteristic and body part, the action categories are divided into leg actions, one-hand actions and two-hand actions.
Table 3 Limb action definition table
Fig. 9 illustrates the dynamic recognition of the 17 predefined limb actions: green points on the front of the body represent intermediate states of the recognition process, and red points represent recognition-success states.
Thirty volunteers of different heights and body shapes were tested; each subject performed three repeated sample tests of every limb action, for 1530 action instances in total. The confusion matrix of the action recognition test is given in Table 4, where None means that no action was detected.
Table 4 Confusion matrix of the action recognition test
T | A | B | C | D | E | F | G | H | I | J | K | L | M | N | O | P | Q | No |
A | 1 | 0 | 0. | 0. | 0. | 0. | 0. | 0 | 0 | 0 | 0 | 0. | 0. | 0. | 0. | 0. | 0. | 0. |
B | 0 | 1 | 0. | 0. | 0. | 0. | 0. | 0 | 0 | 0 | 0 | 0. | 0. | 0. | 0. | 0. | 0. | 0. |
C | 0 | 0 | 98 | 0. | 1. | 0. | 0. | 0 | 0 | 0 | 0 | 0. | 0. | 0. | 0. | 0. | 0. | 0. |
D | 0 | 0 | 0. | 98 | 1. | 0. | 0. | 0 | 0 | 0 | 0 | 0. | 0. | 0. | 0. | 0. | 0. | 0. |
E | 0 | 0 | 0. | 0. | 94 | 0. | 0. | 0 | 0 | 0 | 0 | 0. | 0. | 0. | 0. | 0. | 0. | 5. |
F | 0 | 0 | 0. | 0. | 0. | 97 | 0. | 1 | 0 | 0 | 0 | 0. | 0. | 0. | 0. | 0. | 0. | 1. |
G | 0 | 0 | 0. | 0. | 0. | 0. | 97 | 0 | 1 | 0 | 0 | 0. | 0. | 0. | 0. | 0. | 0. | 1. |
H | 0 | 0 | 0. | 0. | 0. | 0. | 0. | 1 | 0 | 0 | 0 | 0. | 0. | 0. | 0. | 0. | 0. | 0. |
I | 0 | 0 | 0. | 0. | 0. | 0. | 0. | 0 | 1 | 0 | 0 | 0. | 0. | 0. | 0. | 0. | 0. | 0. |
J | 0 | 0 | 0. | 0. | 0. | 0. | 0. | 0 | 0 | 1 | 0 | 0. | 0. | 0. | 0. | 0. | 0. | 0. |
K | 0 | 0 | 0. | 0. | 0. | 0. | 0. | 0 | 0 | 0 | 1 | 0. | 0. | 0. | 0. | 0. | 0. | 0. |
L | 0 | 0 | 0. | 0. | 0. | 0. | 0. | 2 | 0 | 0 | 0 | 96 | 0. | 0. | 0. | 0. | 0. | 1. |
M | 0 | 0 | 0. | 0. | 0. | 0. | 0. | 0 | 3 | 0 | 0 | 0. | 95 | 0. | 0. | 0. | 0. | 1. |
N | 0 | 0 | 0. | 0. | 0. | 0. | 0. | 0 | 0 | 0 | 0 | 0. | 0. | 96 | 0. | 0. | 0. | 3. |
O | 0 | 0 | 0. | 0. | 0. | 0. | 0. | 0 | 0 | 0 | 0 | 0. | 0. | 0. | 94 | 0. | 0. | 5. |
P | 0 | 0 | 0. | 0. | 0. | 0. | 0. | 0 | 0 | 0 | 0 | 0. | 0. | 0. | 2. | 95 | 0. | 2. |
Q | 0 | 0 | 0. | 0. | 0. | 0. | 0. | 0 | 0 | 0 | 0 | 0. | 0. | 0. | 0. | 0. | 98 | 1. |
The test results show that the recognition rate of every action exceeds 94%, most actions reach 100%, and the average recognition rate is 98%, satisfying the demands of motion-sensing interaction applications. Specifically, the two-foot jump (E) cannot be identified when the take-off amplitude is small; the left/right-hand circle (L, M) may be misrecognized as lifting the left/right hand upward (H, I); the two-hand oblique-upward and downward pushes (N, O) occasionally go unrecognized; and the two-hand horizontal spread (P) may be mistaken for the two-hand oblique-upward push (O). On the whole, however, the proposed method recognizes the above limb actions well, with an actual recognition feedback time between 0.060 s and 0.096 s, meeting real-time interaction requirements.
2.2 Experimental comparison and analysis
Table 5 compares human motion recognition methods. The multi-instance learning method[12], the action subset combination method[13] and the SSS feature matching method[14] use the MSR-Action3D dataset, while the present invention is tested on a dataset of 1530 action instances from 30 actual users. In the comparison, a recognition accuracy below 0.6 is rated "low", between 0.6 and 0.8 "medium", and between 0.8 and 1.0 "high".
Table 5 Comparison of human motion recognition methods
The multi-instance learning method[12] selects key-frame instances from action data sequences to derive action templates, but a template preserves only the form and model of one behaviour class and ignores variation, so robustness and real-time performance are mediocre. The action subset combination method[13] classifies joint subsets and achieves a high recognition rate, but it works at the level of pre-segmented data streams and cannot perform online recognition on unsegmented streams; although robust, it is computationally complex and its real-time performance is poor. The SSS feature matching method[14] builds a feature dictionary and gesture model by offline training, assigns a label to each frame of an unknown action data stream, and predicts the action type online from extracted SSS features; it can recognize online from unsegmented streams and is robust, but it is computationally complex and its recognition feedback time is unstable (-1.5 s means recognition 1.5 s early, +1.5 s means recognition delayed by 1.5 s). All three methods rely on machine learning and template matching: each recognized action needs a feature dictionary, and extending the action types requires collecting large amounts of action data for offline training, with a high coupling between specific-action recognition and the training set, so extensibility is mediocre. The FAAST action recognition method[11] recognizes with event triggers such as angle, distance and speed; it is computationally light, real-time and quite extensible, and its accuracy on the defined simple actions is high, but the event-trigger technique is inherently limited, robustness is low, and continuous actions are difficult to recognize.
It should be understood that those of ordinary skill in the art may make improvements or variations in light of the above description, and all such improvements and variations fall within the protection scope of the appended claims of the present invention.
Claims (3)
1. A posture sequence finite state machine action identification method, characterized in that: first, the limb node data obtained by the motion-sensing interaction device are transformed into a user-centred limb node coordinate system; by defining limb-node feature vectors, the limb action sequence is sampled and analysed, predefined motion-trajectory regular expressions are established, and a posture sequence finite state machine is constructed, thereby realizing the parsing and recognition of predefined limb actions; in the method of establishing the limb node coordinate system, the user space coordinate system is defined as follows: the user's right-hand direction is the positive x axis, directly above the head is the positive y axis, straight ahead towards the interaction device is the positive z axis, and the centre of the two shoulders is the coordinate origin; the transformation between a coordinate point P(x, y, z) in the Kinect space coordinate system oxyz and the coordinate point P'(x', y', z') in the user space coordinate system o'x'y'z' can be described by formula (1):
In formula (1), O'(x0, y0, z0) is the origin of the user space coordinate system o'x'y'z', and θ is the rotation angle of the user relative to the sensor's xoy plane, θ = arctan((xr - xl)/(zr - zl)), where xr > xl and -45° < θ < +45°;
Under the user coordinate system, the metric unit is described as a cubic unit volume grid; for users of different heights, the proportional correspondence between height and limb length must be considered so that limb actions are described in a unified way; after the coordinate system transformation of formula (1), a space grid model specific to the current user is established in the user coordinate system, the grid model being divided into w³ three-dimensional cubic cells, where w is the one-dimensional grid division number, with value w = 11;
Under the user coordinate system, centred on the origin, the three-dimensional grid is partitioned proportionally: the ratio of the positive to the negative direction of the x axis is 1:1, of the y axis 3:8, and of the z axis 6:5; by computing the edge length d of a unit cell, action types are described in a unified way; the cell edge length is defined according to the user's relative height ratio and can be described as d = h/(w - 1), where h is the relative height of the current user under the user coordinate system and w is the one-dimensional grid division number;
After the three-dimensional grid partition model is established, any region in the user coordinate system can be described in cubic-cell form, guaranteeing that an independent, user-centred limb node coordinate system is maintained at all times.
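The transform and grid sizing of claim 1 can be sketched as follows; the rotation sign convention assumed for formula (1) and the sample coordinates are illustrative only:

```python
import math

def user_transform(p, origin, shoulder_l, shoulder_r):
    """Translate a Kinect-space point to the shoulder-centre origin and
    rotate about the y axis by the user's facing angle θ (sign assumed)."""
    xl, _, zl = shoulder_l
    xr, _, zr = shoulder_r
    theta = math.atan((xr - xl) / (zr - zl))  # θ = arctan((xr-xl)/(zr-zl))
    x, y, z = (p[0] - origin[0], p[1] - origin[1], p[2] - origin[2])
    xp = x * math.cos(theta) - z * math.sin(theta)
    zp = x * math.sin(theta) + z * math.cos(theta)
    return (xp, y, zp)

def cell_size(relative_height, w=11):
    """Unit grid edge d = h / (w - 1), so actions scale with user height."""
    return relative_height / (w - 1)

print(round(cell_size(1.8), 2))  # 0.18
```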
2. The posture sequence finite state machine action identification method according to claim 1, characterized in that the limb-node feature vector is defined as follows: the limb-node feature vector comprises the joint spatial motion vector, the joint motion time interval and the joint spatial distance, and the definition of the limb-node feature vector V is given in formula (2);
In formula (2), T is the action type, k (0 ≤ k ≤ 19) is the joint index, i (i = 0, 1, …, s) is the current sample frame, s is the end frame at which the corresponding joint reaches the next particular sample point, J_k^i J_k^{i+1} is the spatial motion vector of joint k moving from the current sample frame i to the next frame i+1, J_k^i is the spatial coordinate point (x_k^i, y_k^i, z_k^i) of joint k at frame i, Δt_k^s is the time interval over which joint k moves along its trajectory from coordinate point J_k^0 to coordinate point J_k^s, and |P_m P_n| is the spatial distance between specific human joints, used as the proportional feature verification quantity in the grid model;
A spatial motion vector J_k^i J_k^{i+1} is defined for every joint to compute the motion direction and trajectory of the limb node; the duration of each sampling-point transfer of the action is described by the time interval Δt_k^s = t_k^s - t_k^0, where t_k^0 and t_k^s correspond respectively to the times of joint k at the starting sample frame and the end sample frame of each group; from the definition of formula (2), with J_k^i = (x_k^i, y_k^i, z_k^i) and J_k^{i+1} = (x_k^{i+1}, y_k^{i+1}, z_k^{i+1}), the spatial motion vector is expressed as J_k^i J_k^{i+1} = (x_k^{i+1} - x_k^i, y_k^{i+1} - y_k^i, z_k^{i+1} - z_k^i);
In |P_m P_n|, P_m and P_n denote the joint points at the two ends of a limb segment, m and n are the starting and ending index numbers of the joint set, with m < n; (x_j, y_j, z_j) is the spatial coordinate of joint j of the limb segment, and j (m ≤ j ≤ n-1) is the index variable of the computation; the fixed spatial distance between limb joints is then |P_m P_n| = Σ_{j=m}^{n-1} √((x_{j+1} - x_j)² + (y_{j+1} - y_j)² + (z_{j+1} - z_j)²);
From the limb-node feature vector parameters defined above, various interactive actions can be defined; classified by body part and motion characteristic, the limb-node feature vectors of the three representative actions comprise the feature vector of the right-leg side kick, the feature vector of the right-hand circle, and the feature vector of the two-hand horizontal spread;
According to formula (2), the feature vector V(T, k) of each representative action is defined: when joint k reaches the next particular sample point, the end frame s is determined and the current feature vector is analysed as the input parameter of the state transition function; sample frame i is then reset to zero to await the next end frame, analysed again, reset again, until the last sample point;
1) For the right-leg side kick, the generic feature data of the right-foot joint (k = 19) are extracted, thereby defining the limb-node feature vector, where L is the leg length;
2) For the right-hand circle, the generic feature data of the right-hand joint (k = 11) are extracted, thereby defining the limb-node feature vector, where D is the arm length;
3) For the two-hand horizontal spread, the generic feature data of the left/right hand joints (k = 7, 11) are extracted, thereby defining the limb-node feature vector;
By analogy, limb-node feature vectors are defined with the same method for the other limb actions, and the posture sequence finite state machine then analyses the limb-node feature vectors to realize action recognition.
3. The posture sequence finite state machine action identification method according to claim 1, characterized in that the posture sequence finite state machine is constructed as follows: the posture sequence finite state machine Λ is defined, with its five-tuple representation given in formula (3);
Λ = (S, Σ, δ, s0, F) (3)
In formula (3), S is the state set {s0, s1, …, sn, f0, f1}, describing each specific posture state of the action; Σ is the input alphabet of limb-node feature vector sets and restriction parameters, in which the symbol ¬ denotes logical negation; δ is the transition function, defined as S × Σ → S, which transforms the posture sequence finite state machine from the current state into a successor state; s0 is the start state; F = {f0, f1} is the set of final states, representing the recognition-success state and the recognition-invalid state respectively;
In the alphabet Σ, the variable u denotes the set of all limb-node feature vectors V corresponding to a given action type; the feature vectors represent the discrete point-domain regularity of the motion trajectory in the space grid, and from the domain rules the track regular expression of the action can be constructed;
The path restriction p = {(x, y, z) | x ∈ [xmin, xmax], y ∈ [ymin, ymax], z ∈ [zmin, zmax]} bounds the key points of a specific posture; whenever the trajectory leaves the predefined path domain, i.e. ¬p is true, it is marked as an invalid state;
The timestamp t ∈ [tstart, tend] defines the time required for the action to transfer from the current state to a successor state; if some state of the action does not transfer to the next valid state within the defined time, i.e. ¬t is true, the machine jumps to the invalid state;
Each action consists of several typical static postures, each static posture corresponding to a defined state quantity computed from the key-point feature data in the space grid; a state transition must satisfy the path restriction p and the timestamp t, whereby the action type is recognized and the user's interaction intent understood; the five-tuple describes every attribute and every transition step of the posture sequence finite state machine, which operates as follows: from the initial state s0, the predefined action reaches the first valid state s1; if the posture at the next moment is still within the predefined range, the subsequent valid state sk is reached, and so on, until the acceptable final state f0 is hit, i.e. the action is recognized successfully; from the initial state or any valid state, if the behaviour exceeds the path restriction or the timestamp range, the posture sequence is directly marked as invalid, i.e. recognition fails; after any final state is reached, the current posture sequence finite state machine has finished running and is reinitialized to recognize the next group of limb actions.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410293405.9A CN104317386B (en) | 2014-06-25 | 2014-06-25 | A kind of posture sequence finite state machine action identification method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104317386A CN104317386A (en) | 2015-01-28 |
CN104317386B true CN104317386B (en) | 2017-08-04 |
Family
ID=52372625
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410293405.9A Active CN104317386B (en) | 2014-06-25 | 2014-06-25 | A kind of posture sequence finite state machine action identification method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104317386B (en) |
Families Citing this family (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104765454A (en) * | 2015-04-02 | 2015-07-08 | 吉林大学 | Human muscle movement perception based menu selection method for human-computer interaction interface |
CN105512621B (en) * | 2015-11-30 | 2019-04-09 | 华南理工大学 | A kind of shuttlecock action director's system based on Kinect |
CN105534528B (en) * | 2015-12-08 | 2018-03-23 | 杭州电子科技大学 | A kind of contactless physical fitness test system and method for testing based on somatosensory recognition |
US10218882B2 (en) * | 2015-12-31 | 2019-02-26 | Microsoft Technology Licensing, Llc | Feedback for object pose tracker |
US10599919B2 (en) * | 2015-12-31 | 2020-03-24 | Microsoft Technology Licensing, Llc | Detection of hand gestures using gesture language discrete values |
CN106445138A (en) * | 2016-09-21 | 2017-02-22 | 中国农业大学 | Human body posture feature extracting method based on 3D joint point coordinates |
CN106485055B (en) * | 2016-09-22 | 2017-09-29 | 吉林大学 | A kind of old type 2 diabetes patient's athletic training system based on Kinect sensor |
CN106600626B (en) * | 2016-11-01 | 2020-07-31 | 中国科学院计算技术研究所 | Three-dimensional human motion capture method and system |
CN106682594A (en) * | 2016-12-13 | 2017-05-17 | 中国科学院软件研究所 | Posture and motion identification method based on dynamic grid coding |
CN107080940A (en) * | 2017-03-07 | 2017-08-22 | 中国农业大学 | Body feeling interaction conversion method and device based on depth camera Kinect |
CN107203271B (en) * | 2017-06-08 | 2020-11-24 | 华南理工大学 | Double-hand recognition method based on multi-sensor fusion technology |
CN107679522B (en) * | 2017-10-31 | 2020-10-13 | 内江师范学院 | Multi-stream LSTM-based action identification method |
CN108542021A (en) * | 2018-03-18 | 2018-09-18 | 江苏特力威信息系统有限公司 | A kind of gym suit and limbs measurement method and device based on vitta identification |
CN108961867A (en) * | 2018-08-06 | 2018-12-07 | 南京南奕亭文化传媒有限公司 | A kind of digital video interactive based on preschool education |
CN110532874B (en) * | 2019-07-23 | 2022-11-11 | 深圳大学 | Object attribute recognition model generation method, storage medium and electronic device |
CN112101242A (en) * | 2020-09-17 | 2020-12-18 | 四川轻化工大学 | Body action recognition method based on posture sequence state chain |
CN112788390B (en) * | 2020-12-25 | 2023-05-23 | 深圳市优必选科技股份有限公司 | Control method, device, equipment and storage medium based on man-machine interaction |
CN112560817B (en) * | 2021-02-22 | 2021-07-06 | 西南交通大学 | Human body action recognition method and device, electronic equipment and storage medium |
CN114093024A (en) * | 2021-09-24 | 2022-02-25 | 张哲为 | Human body action recognition method, device, equipment and storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6552729B1 (en) * | 1999-01-08 | 2003-04-22 | California Institute Of Technology | Automatic generation of animation of synthetic characters |
US7036094B1 (en) * | 1998-08-10 | 2006-04-25 | Cybernet Systems Corporation | Behavior recognition system |
CN102500094A (en) * | 2011-10-28 | 2012-06-20 | 北京航空航天大学 | Kinect-based action training method |
CN103489000A (en) * | 2013-09-18 | 2014-01-01 | 柳州市博源环科科技有限公司 | Achieving method of human movement recognition training system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||