Embodiments
To describe the technical content, structural features, objects, and effects of the present invention in detail, a detailed description is given below in conjunction with embodiments and the accompanying drawings.
Referring to Fig. 1, which is a schematic diagram of the hardware structure of the interactive virtual touch-control system in an embodiment of the present invention, the system 100 comprises an interactive virtual touch-control device 10, two image-capturing devices 20, and a display device 21, and implements touch input by detecting the user's gestures.
Referring to Fig. 2, which is a functional block diagram of the interactive virtual touch-control device in an embodiment of the present invention, the device 10 comprises an ambient-brightness sensing unit 101, a view recognition unit 102, a finger-number judging unit 103, an operation-mode judging unit 104, a horizontal-plane two-dimensional coordinate establishing unit 105, a vertical-plane two-dimensional coordinate establishing unit 106, a three-dimensional coordinate computing unit 107, an action judging unit 108, a graphics drawing unit 109, a display control unit 110, and a display unit 111. The device 10 can be applied to electronic equipment such as cameras, mobile phones, and tablet computers. Each image-capturing device 20 is connected to the device 10 through a network, whose transmission medium can be a wireless medium such as Bluetooth, ZigBee, or Wi-Fi.
Each image-capturing device 20 includes a first camera 201 and a second camera 202, serving as a longitudinal (top-view) capturing device and a transverse (front-view) capturing device, respectively. The first camera 201, acting as the longitudinal capturing device, can be carried by a mobile portable device such as smart glasses positioned above the user's hand, while the second camera 202, acting as the transverse capturing device, can be carried by a mobile portable device such as a smart bracelet placed in front of the user. Further, the first camera 201 and the second camera 202 of each image-capturing device 20 are an ordinary (visible-light) camera and an infrared camera, respectively. The ordinary camera captures images under good lighting conditions and sends them to the device 10 for analysis of the user's operating actions; the infrared camera captures images under poor lighting conditions and likewise sends them to the device 10 for analysis. The view recognition unit 102 comprises a longitudinal-view recognition subunit 1021 and a transverse-view recognition subunit 1022, arranged to correspond to the first camera 201 and the second camera 202 serving as the longitudinal and transverse capturing devices, for processing and recognizing the images they collect.
In the initial state, two pairs of cameras (a pair of ordinary cameras and a pair of infrared cameras) are used in combination, with their shooting directions set orthogonal to each other, so that the hand's movements in both the vertical and horizontal directions can be captured simultaneously. Typically, the two cameras in the smart glasses (one ordinary and one infrared) point downward, while the two cameras on the smart bracelet or smartphone (one ordinary and one infrared) are positioned horizontally. The rectangular regions photographed by the two pairs of cameras jointly form the image-capturing region.
The ambient-brightness sensing unit 101 senses the brightness value of the environment and sends it to the view recognition unit 102, which decides whether to use the ordinary cameras or the infrared cameras according to a preset brightness threshold. For example, if the brightness sensing range is 1 to 100 and the threshold is 50, the ordinary cameras are used when the ambient brightness value exceeds 50, and the infrared cameras are used when the ambient brightness value is below 50.
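The threshold decision above can be sketched as follows; this is a minimal illustrative sketch, not code from the invention, and the function name and threshold constant are assumptions based on the worked example (range 1–100, threshold 50):

```python
# Hypothetical sketch of the brightness-based camera selection described above.
BRIGHTNESS_THRESHOLD = 50  # preset threshold on a 1-100 brightness scale

def select_camera(ambient_brightness: int) -> str:
    """Return which camera pair the view recognition unit 102 would use."""
    if ambient_brightness > BRIGHTNESS_THRESHOLD:
        return "ordinary"   # good lighting: use the visible-light cameras
    return "infrared"       # poor lighting: use the infrared cameras

print(select_camera(80))  # ordinary
print(select_camera(30))  # infrared
```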
After the camera type has been determined from the ambient brightness value, an initial calibration operation is started, as follows. When the device 10 performs the initial calibration, the user clenches the hand to be used into a fist and holds it flat in mid-air in the region that the two selected groups of cameras can photograph, i.e. the image-capturing region, and keeps it still for a certain period of time. This completes the hand-position initialization procedure, allowing the device 10 to recognize and locate the initial position of the hand for subsequent operations. The principle by which the device 10 recognizes and locates the hand position is described in detail below.
In the time carrying out interactive operation, user will need unsettled the lying against in picture catching region of a hand (hereinafter to be referred as one hand) of operation in both hands, the ambient brightness value judgement that this longitudinal view recognin unit 1021 detects according to this ambient brightness sensing unit 101 is used common camera or infrared camera, and when after definite camera using, the longitudinal common camera of picture pick-up device of conduct above one hand or the view data of infrared camera collection being carried out to hard recognition, to determine the position of hand center in image.The ambient brightness value judgement that this transverse views recognin unit 1022 detects according to this ambient brightness sensing unit 101 is used common camera or infrared camera, and when determining after the camera using to carrying out hard recognition in singlehanded front as the common camera of horizontal picture pick-up device or the view data of infrared camera collection, to determine the position of hand center in image.
Specifically, the position of the hand center determined by the longitudinal-view recognition subunit 1021 is the pixel position of the hand's center point in the XZ coordinate plane of the image; for example, the hand's center-point pixel is located at row a, column b of the XZ-plane image. The position determined by the transverse-view recognition subunit 1022 is the pixel position of the hand's center point in the YZ coordinate plane of the image.
Further, the methods for determining the hand's center point with an ordinary camera include a color-background method and a color-glove method. In the color-background method, the environmental background of the hand operation must be relatively simple and uniform in color, so that the hand can be extracted from the image directly by the color range of human skin; the row number of the center point is then obtained as the mean of the highest and lowest points of the extracted hand region, and its column number as the mean of the leftmost and rightmost points. In the color-glove method, the user wears special pure-red gloves; since ordinary cameras sample in RGB (red-green-blue), the pure-red region can be extracted directly (green or blue may also be used as the glove color). The row and column numbers of the center point are then obtained in the same way, from the means of the extreme points of the extracted hand region.
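The center-point computation shared by these methods — mean of the highest and lowest rows, mean of the leftmost and rightmost columns — can be sketched as below. The function name and the list-of-pixels representation of the extracted hand region are assumptions for illustration:

```python
def hand_center(hand_pixels):
    """Given the (row, col) pixels of the extracted hand region, return the
    center point: row = mean of topmost and bottommost rows,
    col = mean of leftmost and rightmost columns."""
    rows = [r for r, _ in hand_pixels]
    cols = [c for _, c in hand_pixels]
    center_row = (max(rows) + min(rows)) / 2
    center_col = (max(cols) + min(cols)) / 2
    return center_row, center_col

# A toy hand region spanning rows 10-30 and columns 5-25:
print(hand_center([(10, 5), (20, 15), (30, 25)]))  # (20.0, 15.0)
```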
The methods for determining the hand's center point with an infrared camera include a temperature-filtering method and a color-glove method. In the temperature-filtering method, the hand can be extracted directly as the higher-temperature region of the image, exploiting the fact that the temperature of the human body surface is higher than that of the environment; the center point's row number is then obtained from the mean of the highest and lowest points of the extracted hand region, and its column number from the mean of the leftmost and rightmost points. In the color-glove method, the user wears special gloves whose surface generates heat, so the hot region of the image can be extracted directly; the center point is then computed in the same way.
The horizontal-plane two-dimensional coordinate establishing unit 105 converts the hand's center-point pixel position into a two-dimensional coordinate value in the XZ coordinate plane, based on the position in the image recognized by the longitudinal-view recognition subunit 1021 and the camera's pixel resolution. Likewise, the vertical-plane two-dimensional coordinate establishing unit 106 converts the hand's center-point pixel position into a two-dimensional coordinate value in the YZ coordinate plane, based on the position recognized by the transverse-view recognition subunit 1022 and the camera's pixel resolution.
Referring to Fig. 3, the principle of converting the hand's center-point pixel position into a two-dimensional coordinate value in the XZ coordinate plane is as follows. The pixel at the lower-left corner of the image is set as the origin 0 of the two-dimensional coordinate system, and the ratios of the coordinate-value ranges to the image's row and column counts are computed from the image resolution and the target coordinate ranges. For example, if the XZ-plane image resolution is 2000*1000 (width by height) and the coordinate ranges of the two-dimensional XZ coordinate system are 1 to 150 on the X axis and 1 to 100 on the Z axis, then the ratio of the Z-axis range to the image's rows is 100/1000 and the ratio of the X-axis range to the image's columns is 150/2000. The pixel position of the hand's center point is multiplied by these row and column ratios to obtain its two-dimensional coordinate value. For example, if the center point's pixel position is row 300, column 200, its Z coordinate is 300*100/1000 = 30 and its X coordinate is 200*150/2000 = 15. The conversion of the center-point pixel position into a two-dimensional coordinate value in the YZ coordinate plane follows the same principle and is not repeated here.
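Using the worked numbers above (2000*1000 image, X range 150, Z range 100), the conversion can be sketched as follows; the constants and function name are taken from the example, not from any stated implementation:

```python
IMG_COLS, IMG_ROWS = 2000, 1000   # XZ-plane image resolution (width, height)
X_RANGE, Z_RANGE = 150, 100       # coordinate ranges of the XZ coordinate system

def pixel_to_xz(row: int, col: int):
    """Convert a center-point pixel position (row, col) to (x, z) by scaling
    with the range-to-resolution ratios."""
    z = row * Z_RANGE / IMG_ROWS   # 100/1000 per row
    x = col * X_RANGE / IMG_COLS   # 150/2000 per column
    return x, z

print(pixel_to_xz(300, 200))  # (15.0, 30.0), matching the example
```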
The three-dimensional coordinate computing unit 107 establishes the coordinate value of the hand's center point in the XYZ three-dimensional coordinate system from the two-dimensional coordinate values in the XZ and YZ coordinate planes determined, respectively, by the horizontal-plane two-dimensional coordinate establishing unit 105 and the vertical-plane two-dimensional coordinate establishing unit 106.
The working principle for establishing the hand center point's coordinate value in the XYZ three-dimensional coordinate system is as follows. Because the XZ and YZ coordinate planes share a common Z axis, the Z values of the coordinate points in the XZ plane and in the YZ plane are extracted and compared; points whose Z coordinate values are identical or closest can be regarded as the same point. The XZ-plane coordinate value and the YZ-plane coordinate value judged to belong to the same point are then merged into one coordinate point, which serves as the coordinate value in the XYZ three-dimensional coordinate system. Since the two Z values may differ, the Z value of the resulting three-dimensional coordinate is the Z value from the XZ plane plus the Z value from the YZ plane, divided by 2; the X and Y values of the three-dimensional coordinate equal the X value from the XZ plane and the Y value from the YZ plane, respectively.
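The matching-and-merging step can be sketched as below. This is an illustrative sketch under the assumption that points are represented as (x, z) and (y, z) tuples; the function name and the nearest-Z pairing loop are not specified by the text:

```python
def match_and_merge(xz_points, yz_points):
    """Pair each (x, z) point from the top view with the (y, z) point from the
    front view whose Z value is closest (identical or nearest Z values are
    regarded as the same physical point), then merge each pair into one 3D
    point whose Z is the average of the two Z readings."""
    merged = []
    for x, z1 in xz_points:
        y, z2 = min(yz_points, key=lambda p: abs(p[1] - z1))
        merged.append((x, y, (z1 + z2) / 2))
    return merged

# One hand point seen at z=30 from above and z=32 from the front:
print(match_and_merge([(15.0, 30.0)], [(40.0, 32.0), (5.0, 90.0)]))
# [(15.0, 40.0, 31.0)]
```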
The longitudinal-view recognition subunit 1021 sends the position of the hand center in the image to the horizontal-plane two-dimensional coordinate establishing unit 105, and also sends the recognized hand image to the finger-number judging unit 103.
The finger-number judging unit 103 identifies, from the hand image, the number of fingers the user is operating with in the longitudinal view.
Specifically, the finger-number judging unit 103 determines the number of fingers by recognizing fingertip end points in the hand image. The methods for recognizing fingertip end points with an ordinary camera include the color-background method and the color-glove method. In the color-background method, the environmental background of the hand operation must be relatively simple and uniform in color, so that the hand can be extracted from the image directly by the color range of human skin; the cut-off positions where each strip-shaped extension of the hand ends are then computed with a contour end-point algorithm as the end-point positions of the individual fingers, and the total number of end points is counted. In the color-glove method, the user wears special gloves whose fingertips are pure red; since ordinary cameras sample in RGB (red-green-blue), the pure-red points can be extracted directly (green or blue may also serve as the fingertip color), and the total number of end points is counted.
The methods for recognizing fingertip end points with an infrared camera include the temperature-filtering method and the color-glove method. In the temperature-filtering method, the hand can be extracted directly as the higher-temperature region of the image, exploiting the fact that the temperature of the human body surface is higher than that of the environment; the cut-off positions of each strip-shaped extension of the hand are then computed with a contour end-point algorithm as the end-point positions of the individual fingers, and the total number of end points is counted. In the color-glove method, the user wears special gloves whose fingertips are heated points, so the hot spots in the image can be extracted directly, and the total number of end points is counted.
The operation-mode judging unit 104 selects the operation mode according to the number of fingers the user is using, as determined by the finger-number judging unit 103. For example, when no finger is extended (the hand forms a fist), the operation mode of the virtual touch screen is to change the position coordinate of the hand on the screen; when only one finger is used (the other fingers remain clenched), the mode is to select an icon; when two fingers are used, the mode is to drag the selected icon; when three fingers are used, the mode is to slide the whole screen. At most six operation modes can be defined, from zero fingers to five fingers, and the operation mode corresponding to each finger count can be redefined.
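The finger-count-to-mode mapping above lends itself to a simple lookup table; the sketch below is illustrative, with the mode labels paraphrased from the examples and the entries for four and five fingers left undefined, as the text says they may be redefined:

```python
# Hypothetical mode table for the operation-mode judging unit 104.
OPERATION_MODES = {
    0: "move hand position on screen",  # fist, no finger extended
    1: "select an icon",
    2: "drag the selected icon",
    3: "slide the whole screen",
    # 4 and 5 fingers: modes may be defined as needed (up to six in total)
}

def select_mode(finger_count: int) -> str:
    return OPERATION_MODES.get(finger_count, "undefined")

print(select_mode(2))  # drag the selected icon
```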
The action judging unit 108 judges the user's operating position and operation mode from the three-dimensional coordinate value of the hand established by the three-dimensional coordinate computing unit 107, the operation mode determined by the operation-mode judging unit 104, and the XZ coordinate-range regions of the operable icons fed back by the graphics drawing unit 109.
In the present embodiment, during the hand-position initialization phase, the action judging unit 108 takes the lowest point in the vertical direction of the hand center point's three-dimensional coordinate value (that is, the minimum Y coordinate) as the Y-axis value of a click-judgement plane, maps the three-dimensional coordinate value of the hand's center point onto the operable area, and, in combination with the current operation mode, judges the operating result corresponding to the hand passing through this click-judgement plane. For example, in the mode corresponding to zero fingers (changing the hand's position coordinate on the screen), a click has no meaning; in the one-finger mode (selecting an icon), a click selects the icon at that screen position; in the two-finger mode (dragging the selected icon), a click marks the start or end of dragging an icon; in the three-finger mode (sliding the whole screen), a click marks the start or end of sliding the whole screen image.
Since this is the hand-position initialization phase, the action judging unit 108 uses the minimum Y value to set the initial value of the click-judgement plane, so the Y values of the hand coordinates are all greater than or equal to the judgement value of the click plane. When the user subsequently moves the hand and operates in a normal operation mode, the action judging unit 108, each time it receives the three-dimensional coordinate of the hand's center point, no longer resets the click-judgement plane's Y-axis value but directly uses it to judge whether a valid click action has occurred on the virtual touch screen.
The mapping of the hand center point's three-dimensional coordinate value onto the operable area of the touch screen is as follows: the operable area is set to the coordinate-value range of the XZ coordinate plane, so the XZ-plane coordinates of the hand's center point can be mapped directly to a planar position coordinate in the operable area.
A click action is judged from the three-dimensional coordinate value of the hand's center point as follows: once the click-judgement plane's Y-axis value has been set, whenever the Y value of the hand center point's three-dimensional coordinate falls below that value, the point is judged to have passed through the click-judgement plane and a click has occurred; the hand center point is then combined with the icon regions to determine in which region and at which position the user performed the click.
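The two-phase click logic — initialize the plane from the minimum Y seen during initialization, then treat any Y below it as a click — can be sketched as below; the class and method names are illustrative assumptions:

```python
class ClickDetector:
    """Sketch of the click-judgement plane used by the action judging unit 108."""

    def __init__(self):
        self.click_plane_y = None

    def initialize(self, init_y_values):
        # The lowest Y observed during hand-position initialization
        # defines the click-judgement plane.
        self.click_plane_y = min(init_y_values)

    def is_click(self, hand_y: float) -> bool:
        # During normal operation the plane is never reset; a hand Y
        # strictly below the plane counts as passing through it.
        return self.click_plane_y is not None and hand_y < self.click_plane_y

detector = ClickDetector()
detector.initialize([12.0, 10.5, 11.0])
print(detector.is_click(9.8))   # True: hand dropped below the plane
print(detector.is_click(11.2))  # False: hand still above the plane
```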
The graphics drawing unit 109 draws the position of the hand and the picture of the valid operations on the virtual touch screen according to the judgement result of the action judging unit 108 and the current coordinates of all operable icons. At the start, the graphics drawing unit 109 initializes the XZ-plane coordinate values of all operable icons and feeds back the XZ-plane coordinate region of each operable icon to the action judging unit 108. The graphics drawing unit 109 changes the positions of the operable icons according to the different operations, makes a specific response according to the region in which a click occurs (for example highlighting, dragging, or deleting), and sends the drawn image to the display control unit 110; after a move, it updates the coordinate values of the moved operable icons and feeds them back to the action judging unit 108.
The display control unit 110 converts the image drawn by the graphics drawing unit 109 into a signal sequence that the display device 21 can display, and calls the display unit 111 to show the operated image of the virtual touch screen on the display device 21 for the user to watch. From this feedback the user learns the position on the virtual touch screen corresponding to the current hand center point, and can then continue moving the hand according to the displayed content to carry on with the virtual touch operation.
Referring to Fig. 4, which is a schematic flowchart of the interactive virtual touch-control method in an embodiment of the present invention, the method comprises:
Step S30: the ambient-brightness sensing unit 101 senses the brightness value of the environment, and the view recognition unit 102 decides whether to use the ordinary cameras or the infrared cameras according to a preset brightness threshold and the ambient brightness value sensed by the ambient-brightness sensing unit 101.
In the initial state, two pairs of cameras (a pair of ordinary cameras and a pair of infrared cameras) are used in combination, with their shooting directions set orthogonal to each other, so that the hand's movements in both the vertical and horizontal directions can be captured simultaneously. Typically, the two cameras in the smart glasses (one ordinary and one infrared) point downward, while the two cameras on the smart bracelet or smartphone (one ordinary and one infrared) are positioned horizontally. The rectangular regions photographed by the two pairs of cameras jointly form the image-capturing region.
Step S31: the user clenches the hand to be used into a fist, holds it flat in mid-air in the image-capturing region, and keeps it still for a certain period of time; the device 10 recognizes and locates the initial position of the hand, completing the user's hand-position initialization.
The principle by which the device 10 recognizes and locates the hand position is described in detail below.
Step S32: the user holds the hand to be used (hereinafter referred to as the single hand) flat in mid-air within the image-capturing region. The longitudinal-view recognition subunit 1021 performs hand recognition on the image data collected by the ordinary or infrared camera of the longitudinal capturing device above the hand, so as to determine the position of the hand's center in the image. The transverse-view recognition subunit 1022 performs hand recognition on the image data collected by the ordinary or infrared camera of the transverse capturing device in front of the hand, so as to determine the position of the hand's center in that image.
Specifically, the position of the hand center determined by the longitudinal-view recognition subunit 1021 is the pixel position of the hand's center point in the XZ coordinate plane of the image; for example, the hand's center-point pixel is located at row a, column b of the XZ-plane image. The position determined by the transverse-view recognition subunit 1022 is the pixel position of the hand's center point in the YZ coordinate plane of the image.
Further, the methods for determining the hand's center point with an ordinary camera include a color-background method and a color-glove method. In the color-background method, the environmental background of the hand operation must be relatively simple and uniform in color, so that the hand can be extracted from the image directly by the color range of human skin; the row number of the center point is then obtained as the mean of the highest and lowest points of the extracted hand region, and its column number as the mean of the leftmost and rightmost points. In the color-glove method, the user wears special pure-red gloves; since ordinary cameras sample in RGB (red-green-blue), the pure-red region can be extracted directly (green or blue may also be used as the glove color). The row and column numbers of the center point are then obtained in the same way, from the means of the extreme points of the extracted hand region.
The methods for determining the hand's center point with an infrared camera include a temperature-filtering method and a color-glove method. In the temperature-filtering method, the hand can be extracted directly as the higher-temperature region of the image, exploiting the fact that the temperature of the human body surface is higher than that of the environment; the center point's row number is then obtained from the mean of the highest and lowest points of the extracted hand region, and its column number from the mean of the leftmost and rightmost points. In the color-glove method, the user wears special gloves whose surface generates heat, so the hot region of the image can be extracted directly; the center point is then computed in the same way.
Step S33: the horizontal-plane two-dimensional coordinate establishing unit 105 converts the hand's center-point pixel position into a two-dimensional coordinate value in the XZ coordinate plane, based on the position in the image recognized by the longitudinal-view recognition subunit 1021 and the camera's pixel resolution. Likewise, the vertical-plane two-dimensional coordinate establishing unit 106 converts the hand's center-point pixel position into a two-dimensional coordinate value in the YZ coordinate plane, based on the position recognized by the transverse-view recognition subunit 1022 and the camera's pixel resolution.
The principle of converting the hand's center-point pixel position into a two-dimensional coordinate value in the XZ coordinate plane is as follows. The pixel at the lower-left corner of the image is set as the origin 0 of the two-dimensional coordinate system, and the ratios of the coordinate-value ranges to the image's row and column counts are computed from the image resolution and the target coordinate ranges. For example, if the XZ-plane image resolution is 2000*1000 (width by height) and the coordinate ranges of the two-dimensional XZ coordinate system are 1 to 150 on the X axis and 1 to 100 on the Z axis, then the ratio of the Z-axis range to the image's rows is 100/1000 and the ratio of the X-axis range to the image's columns is 150/2000. The pixel position of the hand's center point is multiplied by these row and column ratios to obtain its two-dimensional coordinate value. For example, if the center point's pixel position is row 300, column 200, its Z coordinate is 300*100/1000 = 30 and its X coordinate is 200*150/2000 = 15. The conversion of the center-point pixel position into a two-dimensional coordinate value in the YZ coordinate plane follows the same principle and is not repeated here.
Step S34: the three-dimensional coordinate computing unit 107 establishes the coordinate value of the hand's center point in the XYZ three-dimensional coordinate system from the two-dimensional coordinate values in the XZ and YZ coordinate planes determined, respectively, by the horizontal-plane two-dimensional coordinate establishing unit 105 and the vertical-plane two-dimensional coordinate establishing unit 106.
The method of establishing the hand center point's coordinate value in the XYZ three-dimensional coordinate system is as follows. Because the XZ and YZ coordinate planes share a common Z axis, the Z values of the coordinate points in the XZ plane and in the YZ plane are extracted and compared; points whose Z coordinate values are identical or closest can be regarded as the same point. The XZ-plane coordinate value and the YZ-plane coordinate value judged to belong to the same point are then merged into one coordinate point, which serves as the coordinate value in the XYZ three-dimensional coordinate system. Since the two Z values may differ, the Z value of the resulting three-dimensional coordinate is the Z value from the XZ plane plus the Z value from the YZ plane, divided by 2; the X and Y values of the three-dimensional coordinate equal the X value from the XZ plane and the Y value from the YZ plane, respectively.
Step S35: the finger-number judging unit 103 identifies, from the hand image, the number of fingers the user is operating with in the longitudinal view.
Specifically, the finger-number judging unit 103 determines the number of fingers by recognizing fingertip end points in the hand image. The methods for recognizing fingertip end points with an ordinary camera include the color-background method and the color-glove method. In the color-background method, the environmental background of the hand operation must be relatively simple and uniform in color, so that the hand can be extracted from the image directly by the color range of human skin; the cut-off positions where each strip-shaped extension of the hand ends are then computed with a contour end-point algorithm as the end-point positions of the individual fingers, and the total number of end points is counted. In the color-glove method, the user wears special gloves whose fingertips are pure red; since ordinary cameras sample in RGB (red-green-blue), the pure-red points can be extracted directly (green or blue may also serve as the fingertip color), and the total number of end points is counted.
The methods for recognizing fingertip end points with an infrared camera include the temperature-filtering method and the color-glove method. In the temperature-filtering method, the hand can be extracted directly as the higher-temperature region of the image, exploiting the fact that the temperature of the human body surface is higher than that of the environment; the cut-off positions of each strip-shaped extension of the hand are then computed with a contour end-point algorithm as the end-point positions of the individual fingers, and the total number of end points is counted. In the color-glove method, the user wears special gloves whose fingertips are heated points, so the hot spots in the image can be extracted directly, and the total number of end points is counted.
Step S36: the operation-mode judging unit 104 selects the operation mode according to the number of fingers the user is using, as determined by the finger-number judging unit 103.
For example, when no finger is extended (the hand forms a fist), the operation mode of the virtual touch screen is to change the position coordinate of the hand on the screen; when only one finger is used (the other fingers remain clenched), the mode is to select an icon; when two fingers are used, the mode is to drag the selected icon; when three fingers are used, the mode is to slide the whole screen. At most six operation modes can be defined, from zero fingers to five fingers, and the operation mode corresponding to each finger count can be redefined.
Step S37: the action judging unit 108 judges the user's operating position and operation mode from the three-dimensional coordinate value of the hand established by the three-dimensional coordinate computing unit 107, the operation mode determined by the operation-mode judging unit 104, and the XZ coordinate-range regions of the operable icons fed back by the graphics drawing unit 109.
In the present embodiment, during the hand-position initialization phase, the action judging unit 108 takes the lowest point in the vertical direction of the hand center point's three-dimensional coordinate value (that is, the minimum Y coordinate) as the Y-axis value of a click-judgement plane, maps the three-dimensional coordinate value of the hand's center point onto the operable area, and, in combination with the current operation mode, judges the operating result corresponding to the hand passing through this click-judgement plane. For example, in the mode corresponding to zero fingers (changing the hand's position coordinate on the screen), a click has no meaning; in the one-finger mode (selecting an icon), a click selects the icon at that screen position; in the two-finger mode (dragging the selected icon), a click marks the start or end of dragging an icon; in the three-finger mode (sliding the whole screen), a click marks the start or end of sliding the whole screen image.
Because the Y value observed during the hand-position initialization phase is used by the action judging unit 108 to set the initial value of the click-judgement plane, the Y values of the hand coordinates at that stage are all greater than or equal to the decision value of the click-judgement plane. Afterwards, when the user moves the hand to operate in a normal operation mode, each time the action judging unit 108 receives the three-dimensional coordinate of the hand centre point it no longer resets the click-judgement plane Y-axis value, but directly judges, according to that value, whether an effective click action has occurred on the virtual touch screen.
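The two phases described above (setting the click-judgement plane from the minimum Y during initialization, then holding it fixed while testing for clicks) can be sketched as follows. This is a minimal sketch under the stated behaviour; the class and method names are illustrative, not from the source.

```python
class ClickPlane:
    """Sketch of the click-judgement plane maintained by the action
    judging unit: initialized from the lowest hand-centre Y value,
    then fixed while click detection runs."""

    def __init__(self):
        self.plane_y = None
        self.initialized = False

    def observe_init(self, hand_y: float) -> None:
        # Initialization phase: keep the minimum Y-axis value seen so far.
        if self.plane_y is None or hand_y < self.plane_y:
            self.plane_y = hand_y

    def finish_init(self) -> None:
        # After this point the plane Y value is no longer reset.
        self.initialized = True

    def is_click(self, hand_y: float) -> bool:
        # Normal operation: a click occurs when the hand centre's Y value
        # drops below the click-judgement plane.
        return self.initialized and hand_y < self.plane_y
```

During initialization every observed Y is, by construction, greater than or equal to the plane value, so no spurious clicks can be reported in that phase.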
The method of mapping the three-dimensional coordinate value of the hand centre point onto the operable area of the touch screen is as follows: the operable area is defined as a range of coordinate values on the XZ coordinate plane, so the XZ coordinates of the hand centre point can be mapped directly to a planar position coordinate within the operable area.
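One way to realize this mapping is a linear rescaling of the hand's XZ coordinate from the camera's coordinate range into the operable area's coordinate-value range. The source only says the coordinates are "directly mapped", so the linear form below is an assumption, and all names are illustrative.

```python
def map_hand_to_area(x: float, z: float,
                     cam_x: tuple, cam_z: tuple,
                     area_x: tuple, area_z: tuple) -> tuple:
    """Linearly map the hand centre's XZ coordinate (in camera-space
    ranges cam_x, cam_z) into the operable area's XZ coordinate ranges
    area_x, area_z. Each range is a (min, max) pair."""
    u = (x - cam_x[0]) / (cam_x[1] - cam_x[0]) * (area_x[1] - area_x[0]) + area_x[0]
    v = (z - cam_z[0]) / (cam_z[1] - cam_z[0]) * (area_z[1] - area_z[0]) + area_z[0]
    return u, v
```

When the camera range and the operable-area range coincide, this reduces to the identity, i.e. a direct mapping.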
The method of judging a click action from the three-dimensional coordinate value of the hand centre point is as follows: once the click-judgement plane Y-axis value has been set, whenever the Y value in the three-dimensional coordinate of the hand centre point falls below the click-judgement plane Y-axis value, it is judged that the hand has passed through the click-judgement plane and a click behaviour has occurred; the hand centre point is then used to determine in which region, and at which position, the user performed the click operation.
Step S38: the graphic plotting unit 109 draws the position of the hand and the picture of the valid operation on the virtual touch screen according to the judgement result of the action judging unit 108 and the current coordinates of all operable icons.
At the start, the graphic plotting unit 109 initializes the XZ-plane coordinate values of all operable icons and feeds the XZ coordinate region of each operable icon back to the action judging unit 108. The graphic plotting unit 109 then changes the positions of the operable icons according to the different operations, makes a specific response according to the position region in which the click behaviour occurs (for example highlighting a selection, dragging, or deleting), and sends the drawn image to the display control unit 110; after an icon is moved, it updates the icon's coordinate values and feeds them back to the action judging unit 108.
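Determining which operable icon's XZ coordinate region a click falls into (the region feedback exchanged between units 109 and 108) can be sketched as a simple hit-test over per-icon rectangles. This is an illustrative sketch; the function name and region layout are assumptions, not from the source.

```python
def icon_at(x: float, z: float, icon_regions: dict):
    """Return the name of the operable icon whose XZ coordinate region
    contains the mapped hand position, or None if the position falls
    outside every icon. Each region is (x_min, x_max, z_min, z_max)."""
    for name, (x0, x1, z0, z1) in icon_regions.items():
        if x0 <= x <= x1 and z0 <= z <= z1:
            return name
    return None
```

The action judging unit can combine this lookup with the current operation mode to decide, for instance, whether a click selects the icon or begins dragging it.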
Step S39: the display control unit 110 converts the image drawn by the graphic plotting unit 109 into a timing sequence that the display device 21 can display, and calls the display unit 111 to show the operated image of the virtual touch screen on the display device 21 for the user to watch. From this feedback the user learns the position in the virtual touch screen corresponding to the current hand centre point, and can then continue to move the hand according to the displayed content to proceed with virtual touch operations.
The man-machine interactive virtual touch device, system and method provided by the present invention capture images through cameras, identify the hand by image recognition to determine the operating position, and judge the operation mode from the number of extended fingers; the obtained three-dimensional hand coordinates are directly mapped to operation actions on a virtual touch screen, which are displayed on a display and fed back to the user. Touch-screen input therefore no longer requires a physical device: a virtual touch-screen input environment can be quickly constructed through smart glasses and a smart bracelet, or through the camera devices of an intelligent portable mobile device, so that the user can perform touch-screen input anytime and anywhere and conveniently carry out free man-machine interactive operation through the virtual touch screen.
The foregoing descriptions are merely embodiments of the present invention and are not intended to limit the scope of the claims of the present invention. Any equivalent structure or equivalent process transformation made using the contents of the specification and accompanying drawings of the present invention, or any direct or indirect application in other related technical fields, shall likewise fall within the patent protection scope of the present invention.