CN104808790A - Method of obtaining invisible transparent interface based on non-contact interaction - Google Patents

Method of obtaining an invisible transparent interface based on non-contact interaction

Info

Publication number
CN104808790A
Authority
CN
China
Prior art keywords
transparent interface
interface
gesture
centre
transparent
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510163408.5A
Other languages
Chinese (zh)
Other versions
CN104808790B (en)
Inventor
冯仕昌 (Feng Shichang)
冯志全 (Feng Zhiquan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN201510163408.5A
Publication of CN104808790A
Application granted
Publication of CN104808790B
Status: Expired - Fee Related


Abstract

The invention provides a method of obtaining an invisible transparent interface based on non-contact interaction, characterized by comprising the following steps: (1) inputting and initializing a gesture-operation video stream, setting k = 1, and obtaining the k-th frame image; (2) calculating the gesture position, the length, width and height of the transparent interface, and the centre of gravity of the transparent interface at the current time k; (3) calculating P_k(p1, p2, p3) according to the given formula; (4) if the current gesture position leaves the region of the current transparent interface, relocating the position and size of the transparent interface and returning to step (2); (5) refreshing the centre-of-gravity position of the transparent interface; (6) refreshing the lengths of the transparent interface in the length, width and height directions; (7) judging whether the current operation is a 2D (two-dimensional) or a 3D (three-dimensional) operation; (8) if the structure of the transparent interface tends to be stable, outputting the position, size and structure information of the transparent interface.

Description

Method of obtaining an invisible transparent interface based on non-contact interaction
Technical field
The present invention relates to the field of virtual gesture operation, and in particular to a method of obtaining an invisible transparent interface based on non-contact interaction.
Background technology
Television is becoming a hub for interaction with many kinds of content. These large, high-resolution displays can be used to browse digital photographs, select music, play games, watch films and view TV programmes. Many of today's televisions are connected to the internet and give access to online content and social media, which has further driven the development of new products such as Apple TV and Google TV and increased both the quantity and the complexity of the information obtainable from the TV screen. Domestic brands such as LeTV and Xiaomi are also very popular; LeTV has ranked first in internet television since last October, surpassing traditional brands such as Hisense and Changhong, and it offers voice interaction and an intelligent remote control based on touch operation. In many cases the TV remote control is itself a limiting factor: it usually provides only simple push buttons for interacting with the TV and lacks the flexibility of mouse and gesture interaction. Nasser H. Dardas and Mohammad Alhaj used gesture recognition to generate control commands and used them to control the motion of objects in a game; their gesture recognition system employs bag-of-words and support vector machine techniques to realize interaction between the user and the computer. Werner et al. solved the precision problem of remote-interaction pointing devices by implicitly adjusting the display control to the needs of the current user without violating the user's mental model. Joe Weakliam et al. proposed the CoMPASS system, which requires no explicit input from the user but monitors the user's implicit operations while the user browses particular spatial content of a map according to regions of interest and their features; the system analyses the user's implicit behaviour and uses the results to build a user model. Kaori Fujinami et al. integrated augmentation technology into ubiquitous computing, acquiring information about the user's environment by natural or implicit means so that the user does not need to learn how to obtain it, bridging the gap between the user and a complex computing environment. Paul Dietz et al. proposed an implicit interaction technique based on multiple projectors, using the projectors as real-time output devices: when a user enters the infrared sensing range, the system implicitly attends to the user, displays to the user in the viewing area, and cyclically shows related content through animation. Stavros Antifakos et al. designed an implicit interaction paradigm of non-accidental synchronized movement for "smart" objects, in which the system uses implicit addressing to control the unlocking of a door.
Summary of the invention:
The technical problem to be solved by the present invention is to provide a method of obtaining an invisible transparent interface based on non-contact interaction, to clarify the relationship between the spatial function distribution of the transparent interface, the user's behaviour model and the sense of pleasure in the experience, and to verify the cognitive mechanism of transparent perception and behavioural characteristics based on the transparent interface, thereby providing a more intelligent, humanized and natural interaction paradigm for the design of three-dimensional user interface systems, virtual reality systems, animation and game software systems, and especially interfaces such as interactive intelligent digital television and 3D games.
The present invention adopts the following technical scheme to achieve the object of the invention:
A method of obtaining an invisible transparent interface based on non-contact interaction, characterized by comprising the following steps:
(1) Initialization: set k = 1 and L(k,i) = 0, i = 1, 2, 3, where L(k,i) denotes the edge lengths of the transparent interface; when |H1 - H2| < τ, i.e. when the hand remains substantially still, where τ is a non-negative empirical parameter and H1 and H2 denote the gesture centre-of-gravity positions obtained through the Kinect device interface in two adjacent frames, set P_k(x, y, z) = (H1 + H2)/2 and O_k(x, y, z) = P_k(x, y, z), where O_k(x, y, z) is the centre-of-gravity position of the transparent interface at time k;
(2) Set k = k + 1 and calculate P_k(x, y, z), the gesture centre-of-gravity position at the current time k;
(3) If the centre of the human body has moved, relocate the position and size of the transparent interface; that is, for an empirical parameter β, if
|C_k(x, y, z) - C_{k-1}(x, y, z)| > β    (1)
then go to step (1), where C_k denotes the centre-of-gravity position of the human body;
(4) Refresh the centre-of-gravity position of the transparent interface:
O_k(o1, o2, o3) = (n*O_{k-1}(x, y, z) + P_k(x, y, z)) / (n + 1)    (2)
where n denotes the number of trajectory points participating in the accumulated gesture centre-of-gravity position within the transparent interface;
(5) Refresh the edge lengths of the transparent interface along the three directions:
L(k,i) = max(L(k-1,i), 2*|P_k(x, y, z) - O_k(x, y, z)|)    (3)
where i = 1, 2, 3;
(6) Judge whether the current operation belongs to the 2D operation region or the 3D operation region of the physical interface;
(7) If the structure of the transparent interface tends to be stable, i.e.
(||L(k,i) - L(k-1,i)|| < μ) and (||O_k(x, y, z) - O_{k-1}(x, y, z)|| < μ)    (4)
then output the position, size and structure information of the transparent interface, where μ is a preset constant; otherwise, go to step (2).
As a further limitation of the technical scheme, said step (7) comprises the following steps:
(7.1) if the difference between the gesture centre-of-gravity position of the current k-th frame and the centre-of-gravity position of the transparent interface exceeds a defined threshold, the current operation is considered a 3D-region operation; otherwise it is identified as a 2D-region operation; that is, if
λ < ||P_k(x, y, z) - O_k(x, y, z)|| < L3    (5)
then the current gesture operation is a 2D operation, otherwise it is a 3D operation, where λ and L3 are constants;
(7.2) refresh 2DR and 3DR, which denote the 2D region and the 3D region on the display respectively: compute the maximal bounding boxes of 2DR and 3DR to obtain (2DR_LB, 2DR_RT) and (3DR_LB, 3DR_RT), where 2DR_LB and 2DR_RT denote the lower-left and upper-right corner positions of the 2DR region, and 3DR_LB and 3DR_RT denote the two diagonal corners, i.e. the lower-left and upper-right corner positions, of the 3DR region; the positions and sizes of the 2DR and 3DR regions are thus determined.
As a further limitation of the technical scheme, said transparent interface refers to the 3D interaction sensing region between the user and the physical interface as perceived by the computer.
As a further limitation of the technical scheme, said physical interface refers to the display screen of the display device.
As a further limitation of the technical scheme, said transparent interface is a horizontal rectangle.
As a further limitation of the technical scheme, said transparent interface is a vertical rectangle.
As a further limitation of the technical scheme, said transparent interface is semi-cylindrical.
As a further limitation of the technical scheme, said physical interface is divided into a 2D operation region and a 3D operation region.
As a further limitation of the technical scheme, the calculation procedure for the human-body centre position in said step (3) is:
(3.1) obtain the human skeleton coordinates from the Kinect device;
(3.2) replace the human-body centre position C_k(x, y, z) with the mean of the left and right knee-joint coordinates (A_1(x, y, z), A_2(x, y, z)), the left and right hip-joint coordinates (A_3(x, y, z), A_4(x, y, z)), the left and right shoulder-joint coordinates (A_5(x, y, z), A_6(x, y, z)), the left and right sternoclavicular-joint coordinates (A_7(x, y, z), A_8(x, y, z)) and the mandibular-joint coordinate (A_9(x, y, z)), namely
C_k(x, y, z) = (1/9) * Σ_{i=1}^{9} A_i(x, y, z)
where A_i(x, y, z) denotes a joint position, which can be obtained directly from the Kinect device's dynamic link library and is expressed as an (x, y, z) three-dimensional coordinate.
As a further limitation of the technical scheme, the calculation procedure for the gesture centre-of-gravity position in said steps (2) and (3) is:
(2.1) segment the gesture image from the background in the video frame image obtained at time k, according to skin colour and depth value, to obtain the RGB bitmap I of the gesture;
(2.2) calculate the two-dimensional centre-of-gravity position (x, y) of the gesture image in bitmap I;
(2.3) read the depth value z corresponding to (x, y) from the interface provided by the Kinect device, thereby obtaining P_k(x, y, z).
Compared with the prior art, the advantages and positive effects of the present invention are as follows. The present invention gives the physical space of gesture manipulation the attributes of an information space and structures the interaction sensing region, achieving the "realization" of the physical space (the transparent interface); by establishing the correspondence between the physical interface and the transparent interface, it achieves the "virtualization" of the physical interface. In this way, on the one hand, making the real virtual and the virtual real realizes information enhancement and seamless fusion of both the physical interface and the transparent interface, "drawing" the user closer to the physical interface and bringing it closer to the user's mental model; on the other hand, building an implicit-interaction interface paradigm through the transparent interface is expected to offer new solutions to some hot or difficult problems in human-computer interaction, for example providing a new breakthrough for problems such as Midas Touch, gesture-operation fatigue, and transparent switching between mixed multi-modal gestures. Clarifying the relationship between the spatial function distribution of the transparent interface, the user's behaviour model and the sense of pleasure in the experience, and verifying the cognitive mechanism of transparent perception and behavioural characteristics based on the transparent interface, provides a more intelligent, humanized and natural interaction paradigm for the design of three-dimensional user interface systems, virtual reality systems, animation and game software systems, and especially interfaces such as interactive intelligent digital television and 3D games.
Brief description of the drawings
Fig. 1 is a structural diagram of the transparent interface of the present invention.
Fig. 2 is a schematic diagram of the mapping from the transparent interface to the physical interface in the present invention.
Fig. 3 is a schematic diagram of a planar transparent interface of the present invention.
Fig. 4 is a schematic diagram of a circular-arc transparent interface of the present invention.
Fig. 5 shows the projection effect of the transparent interface of the present invention.
Embodiments
The present invention is further illustrated below in conjunction with embodiments.
1. Key concepts
(1) Transparent interface
By analysing the behaviour of the user's gesture operations, the computer transparently perceives a 3D interaction sensing region between the user and the physical interface. This region follows the user's body position and posture, as if an invisible functional screen near the user's gestures were moving along with them. The present invention calls this user-interaction sensing region, with its specific structure and function, the (invisible) transparent interface.
The significance of the transparent interface is that it not only reflects the user's gesture-operation behaviour model, but also characterizes the user's mental model and standardizes the non-contact interaction interface paradigm, making the non-contact interactive interface computable and perceivable. In this sense, the transparent interface unifies contact interaction and non-contact interaction well.
(2) Physical interface
The present invention calls the display screen of the display device the physical interface; the physical interface is provided with a 3D operation region and a 2D operation region.
(3) Physical space
The gesture-operation space between the user and the physical interface is called the physical space. Fig. 1 shows one possible structure of the transparent interface: different functional areas are distributed on different layers, and each layer can be provided with different sub-blocks.
2. Construction of the transparent interface
2.1 Acquisition of the transparent interface
Through the computer's perception of the range of the user's gesture operations, a transparent interface with dynamic self-adaptive capability is constructed: the user may move gestures or body freely, and the transparent interface follows the changes in the position of the user's hand and adjusts automatically, thereby realizing the "realization" and information enhancement of the physical space of gesture manipulation. Besides obtaining the transparent interface through user learning and training, a new method of obtaining the transparent interface is proposed here.
Let the centre of gravity of the current transparent interface be O_k(x, y, z), its edge lengths along the three directions be L(k,i) (i = 1, 2, 3), the current gesture position be P_k(x, y, z), and the image frame number be k. 2DR and 3DR denote the two-dimensional (2D) region and the three-dimensional (3D) region respectively; 2DR_LB and 3DR_LB denote the lower-left corner positions of 2DR and 3DR, and 2DR_RT and 3DR_RT denote their upper-right corner positions. The specific algorithm is as follows:
(1) Initialization: set k = 1 and L(k,i) = 0, i = 1, 2, 3, where L(k,i) denotes the edge lengths of the transparent interface; when |H1 - H2| < τ, i.e. when the hand remains substantially still, where τ is a non-negative empirical parameter and H1 and H2 denote the gesture centre-of-gravity positions obtained through the Kinect device interface in two adjacent frames, set P_k(x, y, z) = (H1 + H2)/2 and O_k(x, y, z) = P_k(x, y, z), where O_k(x, y, z) is the centre-of-gravity position of the transparent interface at time k;
(2) Set k = k + 1 and calculate P_k(x, y, z), the gesture centre-of-gravity position at the current time k;
(3) If the centre of the human body has moved, relocate the position and size of the transparent interface; that is, for an empirical parameter β, if
|C_k(x, y, z) - C_{k-1}(x, y, z)| > β    (1)
then go to step (1), where C_k denotes the centre-of-gravity position of the human body;
(4) Refresh the centre-of-gravity position of the transparent interface:
O_k(o1, o2, o3) = (n*O_{k-1}(x, y, z) + P_k(x, y, z)) / (n + 1)    (2)
where n denotes the number of trajectory points participating in the accumulated gesture centre-of-gravity position within the transparent interface;
(5) Refresh the edge lengths of the transparent interface along the three directions:
L(k,i) = max(L(k-1,i), 2*|P_k(x, y, z) - O_k(x, y, z)|)    (3)
where i = 1, 2, 3;
(6) Judge whether the current operation belongs to the 2D operation region or the 3D operation region of the physical interface;
(7) If the structure of the transparent interface tends to be stable, i.e.
(||L(k,i) - L(k-1,i)|| < μ) and (||O_k(x, y, z) - O_{k-1}(x, y, z)|| < μ)    (4)
then output the position, size and structure information of the transparent interface, where μ is a preset constant; otherwise, go to step (2).
Said step (7) comprises the following steps:
(7.1) if the difference between the gesture centre-of-gravity position of the current k-th frame and the centre-of-gravity position of the transparent interface exceeds a defined threshold, the current operation is considered a 3D-region operation; otherwise it is identified as a 2D-region operation; that is, if
λ < ||P_k(x, y, z) - O_k(x, y, z)|| < L3    (5)
then the current gesture operation is a 2D operation, otherwise it is a 3D operation, where λ and L3 are constants;
(7.2) refresh 2DR and 3DR, which denote the 2D region and the 3D region on the display respectively: compute the maximal bounding boxes of 2DR and 3DR to obtain (2DR_LB, 2DR_RT) and (3DR_LB, 3DR_RT), where 2DR_LB and 2DR_RT denote the lower-left and upper-right corner positions of the 2DR region, and 3DR_LB and 3DR_RT denote the two diagonal corners, i.e. the lower-left and upper-right corner positions, of the 3DR region; the positions and sizes of the 2DR and 3DR regions are thus determined.
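To make the procedure concrete, the following is a minimal Python sketch of steps (1)-(7); it is illustrative and not part of the patent text. The per-frame gesture centroids P and body centres C are assumed to have been obtained already through the Kinect interface as in steps (2.1)-(2.3) and (3.1)-(3.2), the threshold values are placeholders for the empirical parameters τ, β, μ, λ and L3, and reading |P_k - O_k| per axis in step (5) is our interpretation of the per-direction lengths L(k,i).

```python
import numpy as np

# Placeholder empirical thresholds (tau, beta, mu, lambda, L3 in the text).
TAU, BETA, MU, LAM, L3 = 0.01, 0.3, 0.005, 0.05, 0.5

def acquire_transparent_interface(P, C):
    """Sketch of steps (1)-(7). P[k] and C[k] are the gesture centroid and
    body centre (3-vectors) for frame k. Returns (O, L, mode), or None if
    acquisition must restart from step (1)."""
    P, C = np.asarray(P, float), np.asarray(C, float)
    # Step (1): initialize once the hand is nearly still, |H1 - H2| < tau.
    k = 1
    while k < len(P) and np.linalg.norm(P[k] - P[k - 1]) >= TAU:
        k += 1
    if k >= len(P):
        return None                                 # hand never came to rest
    O = (P[k] + P[k - 1]) / 2.0                     # O_k = (H1 + H2) / 2
    L = np.zeros(3)                                 # edge lengths L(k, i)
    n, mode = 1, "2D"
    for k in range(k + 1, len(P)):                  # step (2): next frame
        if np.linalg.norm(C[k] - C[k - 1]) > BETA:  # step (3), eq. (1)
            return None                             # body moved: restart
        O_prev, L_prev = O.copy(), L.copy()
        O = (n * O + P[k]) / (n + 1)                # step (4), eq. (2)
        n += 1
        L = np.maximum(L, 2.0 * np.abs(P[k] - O))   # step (5), eq. (3)
        d = np.linalg.norm(P[k] - O)                # step (6): classify, eq. (5)
        mode = "2D" if LAM < d < L3 else "3D"
        if (np.linalg.norm(L - L_prev) < MU and     # step (7), eq. (4)
                np.linalg.norm(O - O_prev) < MU):
            break
    return O, L, mode
```

On a stream in which the hand traces out the intended operating volume, the loop stops once consecutive updates change both O and L by less than μ, which is exactly the stability criterion of step (7).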
The calculation procedure for the human-body centre position in said step (3) is:
(3.1) obtain the human skeleton coordinates from the Kinect device;
(3.2) replace the human-body centre position C_k(x, y, z) with the mean of the left and right knee-joint coordinates (A_1(x, y, z), A_2(x, y, z)), the left and right hip-joint coordinates (A_3(x, y, z), A_4(x, y, z)), the left and right shoulder-joint coordinates (A_5(x, y, z), A_6(x, y, z)), the left and right sternoclavicular-joint coordinates (A_7(x, y, z), A_8(x, y, z)) and the mandibular-joint coordinate (A_9(x, y, z)), namely
C_k(x, y, z) = (1/9) * Σ_{i=1}^{9} A_i(x, y, z)
where A_i(x, y, z) denotes a joint position, which can be obtained directly from the Kinect device's dynamic link library and is expressed as an (x, y, z) three-dimensional coordinate.
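A corresponding sketch of steps (3.1)-(3.2), assuming the nine joint coordinates have already been read from the Kinect skeleton stream; the joint names used as dictionary keys are illustrative placeholders, not identifiers from any particular Kinect SDK:

```python
import numpy as np

def body_centre(joints):
    """Steps (3.1)-(3.2): approximate the human-body centre C_k(x, y, z)
    as the mean of the nine joint positions A_1..A_9 (left/right knees,
    hips, shoulders, sternoclavicular joints, and the mandible).
    joints: dict mapping joint name -> (x, y, z) coordinate."""
    names = ["knee_left", "knee_right", "hip_left", "hip_right",
             "shoulder_left", "shoulder_right",
             "sternoclavicular_left", "sternoclavicular_right", "mandible"]
    A = np.array([joints[n] for n in names], dtype=float)
    return A.mean(axis=0)   # C_k = (1/9) * sum of A_i, i = 1..9
```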
The calculation procedure for the gesture centre-of-gravity position in said step (2) is:
(2.1) segment the gesture image from the background in the video frame image obtained at time k, according to skin colour and depth value, to obtain the RGB bitmap I of the gesture;
(2.2) calculate the two-dimensional centre-of-gravity position (x, y) of the gesture image in bitmap I;
(2.3) read the depth value z corresponding to (x, y) from the interface provided by the Kinect device, thereby obtaining P_k(x, y, z).
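A minimal sketch of steps (2.1)-(2.3), assuming a registered RGB frame and depth map are already available as arrays; the YCrCb skin-colour range and the depth window are illustrative values, not taken from the patent:

```python
import cv2
import numpy as np

def gesture_centroid(rgb, depth, depth_lo=500, depth_hi=900):
    """Steps (2.1)-(2.3): segment the hand by skin colour and depth, take
    the 2D centre of gravity (x, y) of the segmented bitmap I, then read
    the depth value z at (x, y) to obtain P_k(x, y, z).
    rgb: HxWx3 uint8 BGR frame; depth: HxW depth map in millimetres."""
    ycrcb = cv2.cvtColor(rgb, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))   # skin mask
    near = ((depth >= depth_lo) & (depth <= depth_hi))         # depth window
    mask = skin & (near.astype(np.uint8) * 255)                # gesture bitmap I
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:
        return None                                  # no hand in this frame
    x, y = m["m10"] / m["m00"], m["m01"] / m["m00"]  # 2D centre of gravity
    z = float(depth[int(y), int(x)])                 # depth value at (x, y)
    return np.array([x, y, z])
```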
The acquisition of the transparent interface corresponds to the initialization phase of the human-computer interaction system. Once the structure of the transparent interface tends to be stable, the mapping from the transparent interface to the physical interface makes it possible to operate the physical interface without contact, as if operating a tangible interface, or to interact with devices in the environment.
2.2 Mapping between the transparent interface and the physical interface
In general, the transparent interface VI can be a two-dimensional plane or a three-dimensional curved surface, and VI is parallel to the plane in which the physical interface PI lies. Experiments show that the relation from the transparent interface VI to the physical interface PI can be expressed approximately by a perspective mapping (Fig. 2):
[X_P, Y_P, Z_P, H]^T = T * [X_V, Y_V, Z_V, 1]^T    (6)
where
T =
| a11  a12  a13  p1 |
| a21  a22  a23  p2 |
| a31  a32  a33  p3 |
| t1   t2   t3   r  |    (7)
Here, the 3×3 submatrix a_ij (i, j = 1, 2, 3) contains the scaling or rotation parameters, (t1, t2, t3) are the translation parameters, (p1, p2, p3) are the perspective parameters, and r is the overall scaling parameter. Clearly, the matrix T can be solved from the mapping pairs of the four vertices of the figure, which yields the mapping relation from the transparent interface to the physical interface; the mapping from the physical interface to the transparent interface can be obtained in the same way.
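Since VI and PI are both planar here, the four vertex correspondences determine the mapping exactly as a 3×3 homography, the planar reduction of the 4×4 matrix T above. The following minimal sketch solves it by the standard direct linear transform; the corner coordinates in the example are illustrative:

```python
import numpy as np

def solve_homography(src, dst):
    """Solve the 3x3 homography H with dst ~ H @ src in homogeneous
    coordinates, from four corner correspondences, via the direct
    linear transform. src, dst: 4x2 arrays of (x, y) corners."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)      # null vector of A, defined up to scale

def vi_to_pi(H, p):
    """Map a transparent-interface point to physical-interface coordinates."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]              # divide out the homogeneous coordinate H

# Illustrative corners: a unit-square VI mapped onto a 1920x1080 PI.
VI = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
PI = np.array([[0, 0], [1920, 0], [1920, 1080], [0, 1080]], dtype=float)
H = solve_homography(VI, PI)
print(vi_to_pi(H, (0.5, 0.5)))       # -> approximately [960. 540.]
```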
2.3 Multi-style transparent interfaces
The transparent interface does reflect the user's behaviour model, but this model is obtained under the condition that the user faces the physical interface while performing operation tasks, and in the user's subconscious the region of operation is almost parallel to the physical interface. After prolonged operation this requirement often makes the user feel tired; that is, the user's operational load is often heavy. To reduce this load, we further propose the research idea of multi-style transparent interfaces. According to this idea, the transparent interface may lie at an arbitrary angle to the physical interface, and the computer can automatically perceive a multi-style transparent interface and automatically find the correspondence between the transparent interface and the physical interface. Realizing free switching among multiple styles is likely to reduce the user's operational load further. The figure shows one possible transparent-interface style, in which the plane of the physical interface PI is vertical while the plane of the transparent interface VI is horizontal. In fact, from the correspondence between the transparent interface VI and the physical interface PI, the perspective matrix T can always be obtained, so transparent interfaces of different styles are easy to realize.
In addition, the transparent interface VI can also be mapped onto a spherical or cylindrical surface (Fig. 3 and Fig. 4). From the correspondence between the four corner points of VI and PI, the relevant parameters of the corresponding spherical or cylindrical surface are easy to obtain.
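As an illustration of the cylindrical case, the following sketch uses a parametrization of our own choosing (the radius, angular span and height are assumed parameters, not values given in the text) to map normalized transparent-interface coordinates onto a cylindrical surface:

```python
import numpy as np

def cylinder_point(u, v, radius=0.4, angle_span=np.pi / 2, height=0.5):
    """Map normalized VI coordinates (u, v) in [0, 1]^2 onto a cylindrical
    transparent-interface surface: u sweeps the arc, v sweeps the height.
    Returns the (x, y, z) point in metres."""
    theta = (u - 0.5) * angle_span   # angle about the vertical cylinder axis
    x = radius * np.sin(theta)
    y = (v - 0.5) * height           # vertical position on the surface
    z = radius * np.cos(theta)       # depth towards the physical interface
    return np.array([x, y, z])

# Example: the centre of the interface lies straight ahead at the radius.
print(cylinder_point(0.5, 0.5))      # -> [0.  0.  0.4]
```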
Multi-style transparent interfaces reduce, to a certain extent, the user's dependence on the physical interface, overcome the single and mechanical interaction mode of current non-contact interfaces based on gesture input, and better embody the "people-oriented" design concept.

Claims (10)

1. A method of obtaining an invisible transparent interface based on non-contact interaction, characterized by comprising the following steps:
(1) Initialization: set k = 1 and L(k,i) = 0, i = 1, 2, 3, where L(k,i) denotes the edge lengths of the transparent interface; when |H1 - H2| < τ, i.e. when the hand remains substantially still, where τ is a non-negative empirical parameter and H1 and H2 denote the gesture centre-of-gravity positions obtained through the Kinect device interface in two adjacent frames, set P_k(x, y, z) = (H1 + H2)/2 and O_k(x, y, z) = P_k(x, y, z), where O_k(x, y, z) is the centre-of-gravity position of the transparent interface at time k;
(2) Set k = k + 1 and calculate P_k(x, y, z), the gesture centre-of-gravity position at the current time k;
(3) If the centre of the human body has moved, relocate the position and size of the transparent interface; that is, for an empirical parameter β, if
|C_k(x, y, z) - C_{k-1}(x, y, z)| > β    (1)
then go to step (1), where C_k denotes the centre-of-gravity position of the human body;
(4) Refresh the centre-of-gravity position of the transparent interface:
O_k(o1, o2, o3) = (n*O_{k-1}(x, y, z) + P_k(x, y, z)) / (n + 1)    (2)
where n denotes the number of trajectory points participating in the accumulated gesture centre-of-gravity position within the transparent interface;
(5) Refresh the edge lengths of the transparent interface along the three directions:
L(k,i) = max(L(k-1,i), 2*|P_k(x, y, z) - O_k(x, y, z)|)    (3)
where i = 1, 2, 3;
(6) Judge whether the current operation belongs to the 2D operation region or the 3D operation region of the physical interface;
(7) If the structure of the transparent interface tends to be stable, i.e.
(||L(k,i) - L(k-1,i)|| < μ) and (||O_k(x, y, z) - O_{k-1}(x, y, z)|| < μ)    (4)
then output the position, size and structure information of the transparent interface, where μ is a preset constant; otherwise, go to step (2).
2. The method of obtaining an invisible transparent interface based on non-contact interaction according to claim 1, characterized in that said step (7) comprises the following steps:
(7.1) if the difference between the gesture centre-of-gravity position of the current k-th frame and the centre-of-gravity position of the transparent interface exceeds a defined threshold, the current operation is considered a 3D-region operation; otherwise it is identified as a 2D-region operation; that is, if
λ < ||P_k(x, y, z) - O_k(x, y, z)|| < L3    (5)
then the current gesture operation is a 2D operation, otherwise it is a 3D operation, where λ and L3 are constants;
(7.2) refresh 2DR and 3DR, which denote the 2D region and the 3D region on the display respectively: compute the maximal bounding boxes of 2DR and 3DR to obtain (2DR_LB, 2DR_RT) and (3DR_LB, 3DR_RT), where 2DR_LB and 2DR_RT denote the lower-left and upper-right corner positions of the 2DR region, and 3DR_LB and 3DR_RT denote the two diagonal corners, i.e. the lower-left and upper-right corner positions, of the 3DR region; the positions and sizes of the 2DR and 3DR regions are thus determined.
3. The method of obtaining an invisible transparent interface based on non-contact interaction according to claim 1, characterized in that said transparent interface refers to the 3D interaction sensing region between the user and the physical interface as perceived by the computer.
4. The method of obtaining an invisible transparent interface based on non-contact interaction according to claim 1, characterized in that said physical interface refers to the display screen of the display device.
5. The method of obtaining an invisible transparent interface based on non-contact interaction according to claim 1, characterized in that said transparent interface is a horizontal rectangle.
6. The method of obtaining an invisible transparent interface based on non-contact interaction according to claim 1, characterized in that said transparent interface is a vertical rectangle.
7. The method of obtaining an invisible transparent interface based on non-contact interaction according to claim 1, characterized in that said transparent interface is semi-cylindrical.
8. The method of obtaining an invisible transparent interface based on non-contact interaction according to claim 1, characterized in that said physical interface is divided into a 2D operation region and a 3D operation region.
9. The method of obtaining an invisible transparent interface based on non-contact interaction according to claim 1, characterized in that the calculation procedure for the human-body centre position in said step (3) is:
(3.1) obtain the human skeleton coordinates from the Kinect device;
(3.2) replace the human-body centre position C_k(x, y, z) with the mean of the left and right knee-joint coordinates (A_1(x, y, z), A_2(x, y, z)), the left and right hip-joint coordinates (A_3(x, y, z), A_4(x, y, z)), the left and right shoulder-joint coordinates (A_5(x, y, z), A_6(x, y, z)), the left and right sternoclavicular-joint coordinates (A_7(x, y, z), A_8(x, y, z)) and the mandibular-joint coordinate (A_9(x, y, z)), namely
C_k(x, y, z) = (1/9) * Σ_{i=1}^{9} A_i(x, y, z)
where A_i(x, y, z) denotes a joint position, which can be obtained directly from the Kinect device's dynamic link library and is expressed as an (x, y, z) three-dimensional coordinate.
10. The method of obtaining an invisible transparent interface based on non-contact interaction according to claim 1, characterized in that the calculation procedure for the gesture centre-of-gravity position in said steps (2) and (3) is:
(2.1) segment the gesture image from the background in the video frame image obtained at time k, according to skin colour and depth value, to obtain the RGB bitmap I of the gesture;
(2.2) calculate the two-dimensional centre-of-gravity position (x, y) of the gesture image in bitmap I;
(2.3) read the depth value z corresponding to (x, y) from the interface provided by the Kinect device, thereby obtaining P_k(x, y, z).
CN201510163408.5A 2015-04-08 2015-04-08 Method of obtaining an invisible transparent interface based on non-contact interaction Expired - Fee Related CN104808790B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510163408.5A CN104808790B (en) 2015-04-08 2015-04-08 Method of obtaining an invisible transparent interface based on non-contact interaction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510163408.5A CN104808790B (en) 2015-04-08 2015-04-08 Method of obtaining an invisible transparent interface based on non-contact interaction

Publications (2)

Publication Number Publication Date
CN104808790A (en) 2015-07-29
CN104808790B CN104808790B (en) 2016-04-06

Family

ID=53693694

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510163408.5A Expired - Fee Related CN104808790B (en) 2015-04-08 2015-04-08 Method of obtaining an invisible transparent interface based on non-contact interaction

Country Status (1)

Country Link
CN (1) CN104808790B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105302305A (en) * 2015-11-02 2016-02-03 深圳奥比中光科技有限公司 Gesture control method and system
CN105389111A (en) * 2015-10-28 2016-03-09 维沃移动通信有限公司 Operation method for split-screen display and electronic device
CN105955450A (en) * 2016-04-15 2016-09-21 范长英 Natural interaction system based on computer virtual interface
CN106200964A (en) * 2016-07-06 2016-12-07 浙江大学 Method of human-computer interaction in virtual reality based on motion-track recognition
CN106502420A (en) * 2016-11-14 2017-03-15 北京视据科技有限公司 Virtual key triggering method based on image aberration recognition
CN106681516A (en) * 2017-02-27 2017-05-17 盛世光影(北京)科技有限公司 Natural man-machine interaction system based on virtual reality
WO2018076523A1 (en) * 2016-10-25 2018-05-03 科世达(上海)管理有限公司 Gesture recognition method and apparatus, and in-vehicle system
WO2018149318A1 (en) * 2017-02-17 2018-08-23 阿里巴巴集团控股有限公司 Input method, device, apparatus, system, and computer storage medium


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100199228A1 (en) * 2009-01-30 2010-08-05 Microsoft Corporation Gesture Keyboarding
CN102830938A (en) * 2012-09-13 2012-12-19 济南大学 3D (three-dimensional) human-computer interaction method based on gesture and animation
CN103793060A (en) * 2014-02-14 2014-05-14 杨智 User interaction system and method

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105389111A (en) * 2015-10-28 2016-03-09 维沃移动通信有限公司 Operation method for split-screen display and electronic device
CN105389111B (en) * 2015-10-28 2019-05-17 维沃移动通信有限公司 Operation method for split-screen display and electronic device
CN105302305A (en) * 2015-11-02 2016-02-03 深圳奥比中光科技有限公司 Gesture control method and system
CN105955450A (en) * 2016-04-15 2016-09-21 范长英 Natural interaction system based on computer virtual interface
CN106200964A (en) * 2016-07-06 2016-12-07 浙江大学 Method of human-computer interaction in virtual reality based on motion-track recognition
CN106200964B (en) * 2016-07-06 2018-10-26 浙江大学 Method of human-computer interaction in virtual reality based on motion-track recognition
WO2018076523A1 (en) * 2016-10-25 2018-05-03 科世达(上海)管理有限公司 Gesture recognition method and apparatus, and in-vehicle system
CN106502420A (en) * 2016-11-14 2017-03-15 北京视据科技有限公司 Virtual key triggering method based on image aberration recognition
WO2018149318A1 (en) * 2017-02-17 2018-08-23 阿里巴巴集团控股有限公司 Input method, device, apparatus, system, and computer storage medium
CN108459782A (en) * 2017-02-17 2018-08-28 阿里巴巴集团控股有限公司 Input method, device, equipment, system and computer storage medium
CN106681516A (en) * 2017-02-27 2017-05-17 盛世光影(北京)科技有限公司 Natural man-machine interaction system based on virtual reality
CN106681516B (en) * 2017-02-27 2024-02-06 盛世光影(北京)科技有限公司 Natural man-machine interaction system based on virtual reality

Also Published As

Publication number Publication date
CN104808790B (en) 2016-04-06

Similar Documents

Publication Publication Date Title
CN104808790B (en) Method of obtaining an invisible transparent interface based on non-contact interaction
CN103246351B (en) User interaction system and method
CN103793060B (en) User interaction system and method
CN104571823A (en) Non-contact virtual human-computer interaction method based on smart television set
CN110168475A (en) User's interface device is imported into virtual reality/augmented reality system
CN103049852A (en) Virtual fitting system
CN106066688B (en) Virtual reality interaction method and device based on wearable gloves
RU2667720C1 (en) Method of imitation modeling and controlling virtual sphere in mobile device
CN104951073B (en) Gesture interaction method based on virtual interface
CN104714649A (en) Kinect-based naked-eye 3D UI interaction method
Sra et al. Metaspace ii: Object and full-body tracking for interaction and navigation in social vr
CN104598035B (en) Cursor display method, smart device and system based on 3D stereoscopic image display
Gao et al. Digiclay: an interactive installation for virtual pottery using motion sensing technology
Chen et al. Graphic design method based on 3D virtual vision technology
Wang et al. A survey of museum applied research based on mobile augmented reality
CN105929946B (en) Natural interaction method based on virtual interface
Yolcu et al. Real time virtual mirror using kinect
CN109903118A (en) Home decoration scheme display system based on dual-screen interaction
GB2533777A (en) Coherent touchless interaction with stereoscopic 3D images
Phommaly et al. Research of Virtual reality technology in home design
CN207367158U (en) Operating platform combining LeapMotion and Arduino
CN105955450A (en) Natural interaction system based on computer virtual interface
Geng et al. Design practice of interactive imaging art in the new media art-Taking “Ink-wash Tai Chi” as an example
Lim et al. Holographic projection system with 3D spatial interaction
Shen Augmented reality for e-commerce

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
EXSB Decision made by sipo to initiate substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20160406

Termination date: 20200408