CN101751118B - Object end positioning method and applied system thereof - Google Patents

Object end positioning method and applied system thereof

Info

Publication number
CN101751118B
Authority
CN
China
Prior art keywords
points
candidate points
end points
original image
concave
Prior art date
Legal status
Active
Application number
CN2008101856396A
Other languages
Chinese (zh)
Other versions
CN101751118A (en)
Inventor
王科翔
陈柏戎
李家昶
郭建春
Current Assignee
Industrial Technology Research Institute ITRI
Original Assignee
Industrial Technology Research Institute ITRI
Priority date
Filing date
Publication date
Application filed by Industrial Technology Research Institute ITRI
Priority to CN2008101856396A
Publication of CN101751118A
Application granted
Publication of CN101751118B
Status: Active

Abstract

The invention relates to an object end positioning method for locating the end positions of two limbs of an object. Foreground processing is performed on a captured original image to obtain a foreground image, which corresponds to the contour of the object in the original image. A plurality of turning points is obtained according to the foreground image; connected in sequence, the turning points form a polygonal curve that approximates the contour of the object. Each turning point is classified as a convex point or a concave point according to the angle it forms with its adjacent turning points, and a plurality of candidate convex points and candidate concave points is selected. Two of the candidate convex points are selected as two tentative end points; the two tentative end points and a candidate concave point located between them form a triangle corresponding to the two limbs of the object in the original image. Two positioning end points are then determined according to the two tentative end points to locate the end positions of the two limbs of the object.

Description

Object end positioning method and system applying the same
Technical field
The invention relates to an object end positioning method and a system applying the same, and particularly to an object end positioning method and system for locating the end positions of two limbs of an object.
Background art
A human-computer interaction interface lets a person, referred to as the user, interact with a computer. Generally, such an interface includes a keyboard, a mouse, or a touch panel. Through the interface, the user can control the computer or interact with it.
U.S. Patent No. 5,524,637 discloses an interactive system for measuring physiological exertion, in which an accelerometer is worn on the user's feet and the user stands on a pressure-sensing board, so that the system can determine the force exerted by the user's feet and the acceleration with which the feet move. The user can thus interact with the system through the force and moving speed of both feet.
Moreover, in recent years researchers have used image processing techniques to increase the interaction between users and computers. U.S. Patent No. 6,308,565 proposes a system and method for tracking and assessing movement skills in multidimensional space, in which active or passive markers are attached to the user's feet and image processing is used to detect the movement of the markers, thereby determining the coordinate positions and movement states of the feet in space. The user can thus interact with the system through the movement of both feet.
Although various human-computer interaction interfaces have been proposed, in the above approaches the interface can determine the end positions of the user's feet only if the user wears a special device or clothing (such as the accelerometer or the markers described above). This inconveniences the user and may reduce the user's willingness to use the interface. How to locate the end positions of the user's feet without inconveniencing the user therefore remains a problem the industry strives to solve.
Summary of the invention
An object of the present invention is to provide an object end positioning method and a system applying the same, capable of locating the end positions of two limbs of an object. The two limbs may be, for example, the two feet or two fingers of a human body. With this method and system, the user does not need to wear any special device or clothing, which improves ease of use.
To achieve the above object, according to a first aspect of the invention, an object end positioning method is provided for locating the end positions of two limbs of an object. The method comprises the following steps. First, an original image is obtained, the original image having image information corresponding to the object. Next, foreground processing is performed on the original image to obtain a foreground image, the foreground image corresponding to the contour of the object. Then, a plurality of turning points is obtained according to the foreground image; connected in sequence, the turning points form a polygonal curve that substantially approximates the contour of the object in the original image. Afterwards, the turning points are classified into convex points and concave points according to the angle each turning point forms with its two adjacent turning points, and a plurality of candidate convex points and a plurality of candidate concave points is selected along a predetermined direction. Two of the candidate convex points are then selected as two tentative end points; the two selected tentative end points and a candidate concave point located between them form a triangle corresponding to the two limbs of the object in the original image. Finally, two positioning end points are determined according to the two tentative end points to locate the end positions of the two limbs of the object.
According to a second aspect of the invention, an object end positioning system is provided for locating the end positions of two limbs of an object. The system comprises a capture unit, a processing unit, a matching unit, and a positioning unit. The capture unit obtains an original image having image information corresponding to the object. The processing unit performs foreground processing on the original image to obtain a foreground image corresponding to the contour of the object. The processing unit further obtains a plurality of turning points according to the foreground image, the turning points, when connected, forming a polygonal curve that substantially approximates the contour of the object in the original image; it determines a plurality of convex points and a plurality of concave points from the turning points according to the angle each turning point forms with its two adjacent turning points, and selects a plurality of candidate convex points and a plurality of candidate concave points along a predetermined direction. The matching unit selects two of the candidate convex points as two tentative end points, the two selected tentative end points and a candidate concave point located between them forming a triangle corresponding to the two limbs of the object in the original image. The positioning unit determines two positioning end points according to the two tentative end points to locate the end positions of the two limbs of the object.
Description of drawings
Fig. 1 is a flowchart of an object end positioning method according to an embodiment of the invention.
Fig. 2 is a block diagram of an object end positioning system applying the method of Fig. 1.
Figs. 3~7 illustrate examples of the images produced by the object end positioning system while performing the object end positioning method.
Description of reference numerals in the drawings
200: object end positioning system; 210: capture unit; 220: processing unit; 230: matching unit; 240: positioning unit; 250: tracking unit; a1~a4: candidate convex points; b1, b2: candidate concave points; c1~cn: turning points; D1: predetermined direction; F2: contour; F3: polygonal curve; Im1: original image; Im2: foreground image; Px, Py: positioning end points; S110~S160: process steps; t1, t2: tentative end points; t1', t2': tracking end points.
Embodiment
To make the above content of the invention more comprehensible, preferred embodiments are described in detail below in conjunction with the accompanying drawings.
Referring to Fig. 1, a flowchart of an object end positioning method according to an embodiment of the invention is shown. The method locates the end positions of two limbs of an object and comprises the following steps.
First, in step S110, an original image is obtained, the original image having image information corresponding to the object. Next, in step S120, foreground processing is performed on the original image to obtain a foreground image, the foreground image corresponding to the contour of the object.
Then, in step S130, a plurality of turning points is obtained according to the foreground image. Connected in sequence, the turning points form a polygonal curve that substantially approximates the contour of the object in the original image. Afterwards, in step S140, each turning point is classified as a convex point or a concave point according to the angle it forms with its two adjacent turning points, and a plurality of candidate convex points and a plurality of candidate concave points is selected along a predetermined direction.
Next, in step S150, two of the candidate convex points are selected as two tentative end points. The two selected tentative end points and a candidate concave point located between them form a triangle corresponding to the two limbs of the object in the original image. Finally, in step S160, two positioning end points are determined according to the two tentative end points to locate the end positions of the two limbs of the object.
The following uses an object end positioning system applying the method of Fig. 1 as an example. Please refer to Fig. 2 together with Figs. 3~7. Fig. 2 is a block diagram of the object end positioning system 200 applying the method of Fig. 1, and Figs. 3~7 illustrate examples of the images produced by the system 200 while performing the method.
The object end positioning system 200 can locate the end positions Ft of the two feet F of a human body, as shown in Fig. 3. The system 200 comprises a capture unit 210, a processing unit 220, a matching unit 230, a positioning unit 240, and a tracking unit 250.
The capture unit 210 obtains an original image Im1. As shown in Fig. 3, the original image Im1 has image information corresponding to the two feet F of the human body.
The processing unit 220 performs foreground processing on the original image Im1 to obtain a foreground image. The foreground processing may, for example, apply edge detection to the original image Im1. The processing unit 220 thus obtains a foreground image Im2 carrying the edge information of the original image Im1; the foreground image Im2 contains the contour F2 of the feet, as shown in Fig. 4.
When the original image Im1 is foreground-processed, the resulting foreground image Im2 usually contains the contours of all scene objects, such as the contour F2 of the feet and the contours A and B of other objects. The processing unit 220 therefore filters the foreground image Im2 to keep only the contour F2 of the feet. In practice, since the contour F2 of the feet is generally the block with the largest area, the processing unit 220 can compute the areas of the blocks enclosed by contours F2, A, and B, identify the contour F2 of the feet, and keep it.
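The patent does not prescribe a particular implementation, but the two steps above map naturally onto common image-processing primitives. The following is a minimal sketch in Python with OpenCV; the function name, the Canny thresholds, and the choice of Canny itself are illustrative assumptions, not the patent's specification.

# Hypothetical sketch of steps S110-S120: edge-based foreground processing,
# then keeping only the largest-area block (assumed to be the feet contour F2).
import cv2
import numpy as np

def extract_foreground_contour(original: np.ndarray) -> np.ndarray:
    """Return the largest contour found in an edge map of the original image."""
    gray = cv2.cvtColor(original, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)  # edge detection; thresholds are illustrative
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # The patent keeps the block with the maximum area among contours F2, A, B.
    return max(contours, key=cv2.contourArea)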
Then, as shown in Figure 5, processing unit 220 is in addition in order to obtain a plurality of turning point c1~cn according to prospect image Im2.The online polygon curve F3 that forms of these a little turning point c1~cn, the shape of this polygon curve F3 is comparable to the profile F2 of both feet in fact.
Afterwards, the processing unit 220 determines a plurality of convex points and a plurality of concave points from the turning points c1~cn according to the angle each turning point forms with its two adjacent turning points.
The convex and concave points may, for example, be defined as follows: a turning point is a convex point if its corresponding angle is between 0 and 120 degrees, and a concave point if its corresponding angle is greater than 240 degrees, where the angle is the interior angle of the polygonal curve F3. Thus, as shown in Fig. 5, the angle formed by turning point c2 and its two adjacent turning points c1 and c3 meets the definition of a convex point, while the angle formed by turning point c3 and its two adjacent turning points c2 and c4 meets the definition of a concave point.
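A minimal sketch of this classification, using the example thresholds above (convex below 120 degrees, concave above 240 degrees). The winding-direction handling is an implementation detail the patent leaves open.

# Hypothetical sketch of step S140: classifying turning points by interior angle.
import numpy as np

def classify_turning_points(pts: np.ndarray):
    """pts: n x 2 vertices of the closed polygonal curve, in order.
    Returns (convex_indices, concave_indices) using the example thresholds."""
    n = len(pts)
    # Signed area gives the winding direction so interior angles are consistent.
    x, y = pts[:, 0].astype(float), pts[:, 1].astype(float)
    signed_area = 0.5 * np.sum(x * np.roll(y, -1) - np.roll(x, -1) * y)
    convex, concave = [], []
    for i in range(n):
        prev_v = pts[i - 1] - pts[i]
        next_v = pts[(i + 1) % n] - pts[i]
        cross = float(prev_v[0] * next_v[1] - prev_v[1] * next_v[0])
        dot = float(prev_v[0] * next_v[0] + prev_v[1] * next_v[1])
        angle = np.degrees(np.arctan2(abs(cross), dot))  # in [0, 180]
        # For a counter-clockwise polygon a reflex (concave) vertex has cross > 0
        # with these inward-pointing edge vectors; mirrored for clockwise.
        reflex = cross > 0 if signed_area > 0 else cross < 0
        interior = 360.0 - angle if reflex else angle
        if interior < 120.0:
            convex.append(i)
        elif interior > 240.0:
            concave.append(i)
    return convex, concave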
After the convex and concave points are determined, the processing unit 220 selects, for example, four candidate convex points a1~a4 and two candidate concave points b1 and b2 along a predetermined direction. As shown in Fig. 6, the predetermined direction may be the direction D1 of the ends of the feet F relative to the tops of the feet F.
The matching unit 230 then selects two of the candidate convex points a1~a4 as two tentative end points t1 and t2. The two selected tentative end points and a candidate concave point located between them form a triangle corresponding to the two feet F of the human body in the original image Im1. As shown in Fig. 7, the two candidate convex points a1 and a2 and the candidate concave point b1 between them form a triangle, and this triangle substantially marks the position of the feet F.
More specifically, when selecting two of the candidate convex points a1~a4 as the two tentative end points t1 and t2, the matching unit 230 judges whether the candidate convex points a1~a4 and the candidate concave points b1 and b2 satisfy a triangle feature match.
The so-called triangle feature match means that the matching unit 230 judges, with respect to a vector perpendicular to the predetermined direction D1, whether the slope of the line connecting two of the candidate convex points a1~a4 is less than a predetermined slope, and whether the position at which one of the candidate concave points b1 and b2 projects onto that vector lies between the positions at which the two candidate convex points project onto it.
For example, in Fig. 7 the predetermined slope is the slope of a 45° line. The slope S1 of the line connecting candidate convex points a1 and a2 is less than this predetermined slope, and the position d1 at which candidate concave point b1 projects onto the vector D2 lies between the positions d2 and d3 at which the two candidate convex points a1 and a2 project onto D2. The matching unit 230 therefore judges that the candidate convex points a1 and a2 and the candidate concave point b1 satisfy the triangle feature match.
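A sketch of this check, under the simplifying assumption that the predetermined direction D1 points straight up the image, so the perpendicular vector D2 is the horizontal axis and projection onto D2 is just the x coordinate; the 45-degree threshold follows the example above, and the function name is hypothetical.

# Hypothetical sketch of the triangle feature match for one candidate triple.
import math

def triangle_feature_match(p1, p2, concave, max_slope_deg=45.0) -> bool:
    """p1, p2: candidate convex points (x, y); concave: candidate concave point."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    if dx == 0:
        return False  # vertical line: slope exceeds any finite threshold
    if math.degrees(math.atan2(abs(dy), abs(dx))) >= max_slope_deg:
        return False  # line through the two convex points is too steep
    # The concave point's projection onto D2 must lie between the convex points'.
    lo, hi = sorted((p1[0], p2[0]))
    return lo < concave[0] < hi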
Moreover, as shown in Fig. 6, if the side near the end positions Ft of the feet F is defined as the bottom, the predetermined direction D1 is the direction from bottom to top. After judging that the candidate convex points a1 and a2 and the candidate concave point b1 satisfy the triangle feature match, the matching unit 230 further judges whether the candidate convex points a1 and a2 are closer to the bottom than the candidate concave point b1. In the example of Fig. 7, the matching unit 230 judges that they are.
Furthermore, while judging whether the candidate convex points a1~a4 and the candidate concave points b1 and b2 satisfy the triangle feature match, the matching unit 230 may find that, besides the candidate convex points a1 and a2 and the candidate concave point b1, another two candidate convex points and a corresponding candidate concave point also satisfy the triangle feature match. In that case, the matching unit 230 further judges whether the area formed by the candidate convex points a1 and a2 and the candidate concave point b1 is larger than the area formed by the other two candidate convex points and the corresponding candidate concave point.
In the example of Fig. 7, suppose the matching unit 230 judges that the candidate convex points a1 and a2 and the candidate concave point b1 satisfy the triangle feature match, that a1 and a2 are closer to the bottom than b1, and that the area formed by a1, a2, and b1 is the largest. The matching unit 230 then decides to select the two candidate convex points a1 and a2 as the two tentative end points t1 and t2.
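The area comparison can use the shoelace formula; a one-function sketch follows (the helper name is hypothetical). Among all triples that satisfy the triangle feature match, the triple maximizing this area would be kept.

# Hypothetical helper for the tie-break between matching triples.
def triangle_area(p1, p2, p3) -> float:
    """Area of the triangle formed by two convex points and a concave point."""
    return 0.5 * abs((p2[0] - p1[0]) * (p3[1] - p1[1])
                     - (p3[0] - p1[0]) * (p2[1] - p1[1]))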
Referring again to Fig. 2, after the matching unit 230 takes the candidate convex points a1 and a2 as the two tentative end points t1 and t2, the positioning unit 240 determines two positioning end points Px and Py according to the two tentative end points. As can be seen in Fig. 7, the positions of the candidate convex points a1 and a2 determined by the matching unit 230 are essentially the end positions Ft of the feet F, so the two positioning end points Px and Py determined by the positioning unit 240 locate the end positions Ft of the two feet F of the human body.
In the above description, the object end positioning system 200 locates the end positions of the feet of a human body by analyzing image information with image processing. Compared with systems that require the user to wear a special device or clothing in order to locate the feet, the object end positioning system 200 of the invention requires no such device or clothing. This improves ease of use and avoids reducing the user's willingness to use the system.
In addition, when the matching unit 230 judges that the candidate convex points a1~a4 and the candidate concave points b1 and b2 do not satisfy the triangle feature match, the matching unit 230 activates the tracking unit 250. The activated tracking unit 250 obtains two tracking end points t1' and t2' according to two previous end points, which are the end positions of the two limbs of the object located by the positioning unit 240 in a previous original image.
In more detail, the tracking unit 250 can track the end positions of the feet located in the previous original image to produce two tracking end points t1' and t2' close to the two actual positioning end points. Thus, even when the positioning unit 240 cannot obtain the two positioning end points Px and Py through a correct judgment by the matching unit 230, it can still determine two positioning end points Px and Py close to the actual ones from the two tracking end points t1' and t2' provided by the tracking unit 250. This improves the operational stability of the object end positioning system 200.
In the successive original images obtained by the capture unit 210, the displacement of the object (for example, the feet F of the human body) is bounded, so the variation of the end positions of the two limbs located by the positioning unit 240 (for example, the end positions Ft of the feet F) is also bounded. Therefore, if the end positions Ft of the feet F cannot be located in a given original image, the tracking unit 250 can track the end positions Ft located in the previous original image to find two tracking end points t1' and t2' close to the two actual positioning end points. Several examples of how the tracking unit 250 obtains the two tracking end points t1' and t2' follow.
In the first example, the tracking unit 250 determines the two tracking end points t1' and t2' according to the brightness changes of the pixels surrounding the two previous end points between the previous original image and the original image Im1.
In the second example, the tracking unit 250 determines the two tracking end points t1' and t2' according to the color changes of the pixels surrounding the two previous end points between the previous original image and the original image Im1.
In the third example, the tracking unit 250 determines the two tracking end points t1' and t2' by prediction or probability, according to the two previous end points and another two previous end points and the positions they respectively represent. The other two previous end points are the end positions of the two limbs of the object located by the positioning unit 240 in another, earlier original image, and the original image Im1, the previous original image, and the other previous original image are obtained consecutively by the capture unit 210.
In the first and second examples, the two tracking end points t1' and t2' are determined from the brightness or color changes between the original image Im1 and the previous original image; in the third example, they are determined by prediction or probability. These examples serve to illustrate the invention and are not intended to limit it: any technique that lets the tracking unit 250 produce two tracking end points t1' and t2' close to the two actual positioning end points, when the matching unit 230 judges that the candidate convex points a1~a4 and the candidate concave points b1 and b2 do not satisfy the triangle feature match, falls within the protection scope of the invention.
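As one concrete reading of the third example, a constant-velocity extrapolation from the two preceding frames is sketched below; the patent leaves the prediction or probability model open, so this is an assumed instance rather than the method itself.

# Hypothetical sketch of the third tracking example: predict the two tracking
# end points t1', t2' by linear extrapolation from the end points located in
# the two preceding original images (constant-velocity assumption).
from typing import Tuple

Point = Tuple[float, float]

def predict_tracking_end_points(prev: Tuple[Point, Point],
                                prev2: Tuple[Point, Point]) -> Tuple[Point, Point]:
    """prev: end points from the previous frame; prev2: from the frame before."""
    (x1a, y1a), (x1b, y1b) = prev
    (x0a, y0a), (x0b, y0b) = prev2
    # p_next = p + (p - p_before) for each of the two end points.
    return (2 * x1a - x0a, 2 * y1a - y0a), (2 * x1b - x0b, 2 * y1b - y0b)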
Having obtained an original image Im1, the object end positioning system 200 of the invention can determine the two positioning end points Px and Py and thereby locate the end positions Ft of the two feet F of the human body. Furthermore, if the system 200 obtains a sequence of original images over a period of time, it can locate the end positions of the feet in each image in turn, and can then analyze these position data to determine the moving direction, moving speed, or motion pattern of the feet F during that period.
Moreover above-mentioned explanation is that the terminal position Ft that can locate human body both feet F with object end positioning system 200 is an example, so also is not limited thereto.Object end positioning system 200 of the present invention also can be located the terminal position of two fingers of human body.At this moment, above-mentioned predetermined direction will be the direction of the end of finger with respect to the top of finger.That is the object end positioning system 200 of this moment is along the direction of the end of pointing with respect to the top of finger, selects selected salient point and selected concave point.So, object end positioning system 200 also can select two selected salient points to be used as two tentative end points, and this two tentative end points points a corresponding triangle with the online formation of a therebetween selected concave point with two, thereby orients two terminal positions pointed.
The object end positioning method of the above embodiments and the system applying it can locate the end positions of two limbs of an object, such as the two feet or two fingers of a human body, without requiring the user to wear a special device or clothing. This improves ease of use and avoids reducing the user's willingness to use the system.
In summary, although the invention has been described above with preferred embodiments, they are not intended to limit the invention. Those skilled in the art may make various changes and modifications without departing from the spirit and scope of the invention. The protection scope of the invention is therefore defined by the appended claims.

Claims (20)

1. An object end positioning method for locating the end positions of two limbs of an object, the method comprising:
obtaining an original image, the original image having image information corresponding to the object;
performing foreground processing on the original image to obtain a foreground image, the foreground image corresponding to the contour of the object;
obtaining a plurality of turning points according to the foreground image, wherein the turning points, when connected, form a polygonal curve substantially approximating the contour of the object in the original image;
determining a plurality of convex points and a plurality of concave points from the turning points according to the angle formed by each turning point and its two adjacent turning points, and selecting a plurality of candidate convex points and a plurality of candidate concave points along a predetermined direction;
selecting two of the candidate convex points as two tentative end points, wherein the two selected tentative end points and a candidate concave point located between them form a triangle corresponding to the two limbs of the object in the original image; and
determining two positioning end points according to the two tentative end points to locate the end positions of the two limbs of the object.
2. The method of claim 1, wherein the predetermined direction relates to the direction of the ends of the two limbs of the object in the original image with respect to the tops of the two limbs.
3. the method for claim 1, wherein select before this its two steps as these two tentative end points of those selected salient points, this method comprises:
Judge whether those selected salient points and those selected concave points meet the triangle characteristic matching; And
If those selected salient points this its two and this one of which of those selected concave points meet the triangle characteristic matching, then determine with those selected salient points this its two as this two tentative end points.
4. The method of claim 3, wherein the step of deciding to select the two candidate convex points as the two tentative end points comprises:
if the two candidate convex points and the one candidate concave point satisfy the triangle feature match, and another two of the candidate convex points and another of the candidate concave points also satisfy the triangle feature match, judging whether the area formed by the two candidate convex points and the one candidate concave point is larger than the area formed by the other two candidate convex points and the other candidate concave point, and if it is larger, performing the step of deciding to select the two candidate convex points as the two tentative end points.
5. The method of claim 3, wherein the step of judging whether the candidate convex points and the candidate concave points satisfy the triangle feature match comprises:
judging, according to a vector perpendicular to the predetermined direction, whether the slope of the line connecting the two candidate convex points is less than a predetermined slope, and whether the position at which the one candidate concave point projects onto the vector lies between the positions at which the two candidate convex points project onto the vector.
6. The method of claim 3, wherein, if the side near the end positions of the two limbs of the object is defined as the bottom, the predetermined direction is the direction from bottom to top, and the step of deciding to select the two candidate convex points as the two tentative end points comprises:
judging whether the two candidate convex points are closer to the bottom than the one candidate concave point, and if so, performing the step of deciding to select the two candidate convex points as the two tentative end points.
7. The method of claim 3, further comprising:
obtaining two tracking end points according to two previous end points, the two previous end points being the end positions of the two limbs of the object located in a previous original image;
wherein, when the candidate convex points and the candidate concave points do not satisfy the triangle feature match, the step of determining the two positioning end points according to the two tentative end points is replaced with:
determining the two positioning end points according to the two tracking end points.
8. The method of claim 7, wherein the step of obtaining the two tracking end points comprises:
determining the two tracking end points according to the brightness changes of the pixels surrounding the two previous end points in the previous original image and the original image.
9. The method of claim 7, wherein the step of obtaining the two tracking end points comprises:
determining the two tracking end points according to the color changes of the pixels surrounding the two previous end points in the previous original image and the original image.
10. The method of claim 7, wherein the step of obtaining the two tracking end points comprises:
determining the two tracking end points by prediction or probability, according to the two previous end points and another two previous end points and the positions they respectively represent;
wherein the other two previous end points are the end positions of the two limbs of the object located in another previous original image, and the original image, the previous original image, and the other previous original image are obtained consecutively.
11. An object end positioning system for locating the end positions of two limbs of an object, the system comprising:
a capture unit for obtaining an original image, the original image having image information corresponding to the object;
a processing unit for performing foreground processing on the original image to obtain a foreground image, the foreground image corresponding to the contour of the object, the processing unit further obtaining a plurality of turning points according to the foreground image, wherein the turning points, when connected, form a polygonal curve substantially approximating the contour of the object in the original image, and the processing unit determining a plurality of convex points and a plurality of concave points from the turning points according to the angle formed by each turning point and its two adjacent turning points, and selecting a plurality of candidate convex points and a plurality of candidate concave points along a predetermined direction;
a matching unit for selecting two of the candidate convex points as two tentative end points, wherein the two selected tentative end points and a candidate concave point located between them form a triangle corresponding to the two limbs of the object in the original image; and
a positioning unit for determining two positioning end points according to the two tentative end points to locate the end positions of the two limbs of the object.
12. The system of claim 11, wherein the predetermined direction relates to the direction of the ends of the two limbs of the object in the original image with respect to the tops of the two limbs.
13. The system of claim 11, wherein, when selecting two of the candidate convex points as the two tentative end points, the matching unit judges whether the candidate convex points and the candidate concave points satisfy a triangle feature match, and if the matching unit judges that two of the candidate convex points and one of the candidate concave points satisfy the triangle feature match, the matching unit decides to select the two candidate convex points as the two tentative end points.
14. The system of claim 13, wherein, if the matching unit judges that the two candidate convex points and the one candidate concave point satisfy the triangle feature match and that another two of the candidate convex points and another of the candidate concave points also satisfy the triangle feature match, the matching unit judges whether the area formed by the two candidate convex points and the one candidate concave point is larger than the area formed by the other two candidate convex points and the other candidate concave point, and if it is larger, the matching unit decides to select the two candidate convex points as the two tentative end points.
15. The system of claim 13, wherein, when judging whether the candidate convex points and the candidate concave points satisfy the triangle feature match, the matching unit judges, according to a vector perpendicular to the predetermined direction, whether the slope of the line connecting the two candidate convex points is less than a predetermined slope, and whether the position at which the one candidate concave point projects onto the vector lies between the positions at which the two candidate convex points project onto the vector.
16. The system of claim 13, wherein, if the side near the end positions of the two limbs of the object is defined as the bottom, the predetermined direction is the direction from bottom to top, and the two candidate convex points are closer to the bottom than the one candidate concave point.
17. The system of claim 13, wherein the matching unit comprises:
a tracking unit for obtaining two tracking end points according to two previous end points, the two previous end points being the end positions of the two limbs of the object located by the positioning unit in a previous original image;
wherein, when the matching unit judges that the candidate convex points and the candidate concave points do not satisfy the triangle feature match, the positioning unit, instead of determining the two positioning end points according to the two tentative end points, determines the two positioning end points according to the two tracking end points.
18. The system of claim 17, wherein, when obtaining the two tracking end points, the tracking unit determines the two tracking end points according to the brightness changes of the pixels surrounding the two previous end points in the previous original image and the original image.
19. The system of claim 17, wherein, when obtaining the two tracking end points, the tracking unit determines the two tracking end points according to the color changes of the pixels surrounding the two previous end points in the previous original image and the original image.
20. The system of claim 17, wherein, when obtaining the two tracking end points, the tracking unit determines the two tracking end points by prediction or probability, according to the two previous end points and another two previous end points and the positions they respectively represent;
wherein the other two previous end points are the end positions of the two limbs of the object located by the positioning unit in another previous original image, and the original image, the previous original image, and the other previous original image are obtained consecutively by the capture unit.
CN2008101856396A 2008-12-17 2008-12-17 Object end positioning method and applied system thereof Active CN101751118B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2008101856396A CN101751118B (en) 2008-12-17 2008-12-17 Object end positioning method and applied system thereof

Publications (2)

Publication Number Publication Date
CN101751118A CN101751118A (en) 2010-06-23
CN101751118B true CN101751118B (en) 2012-02-22

Family

ID=42478166

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2008101856396A Active CN101751118B (en) 2008-12-17 2008-12-17 Object end positioning method and applied system thereof

Country Status (1)

Country Link
CN (1) CN101751118B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5524637A (en) * 1994-06-29 1996-06-11 Erickson; Jon W. Interactive system for measuring physiological exertion
US6308565B1 (en) * 1995-11-06 2001-10-30 Impulse Technology Ltd. System and method for tracking and assessing movement skills in multidimensional space
CN1991691A (en) * 2005-12-30 2007-07-04 财团法人工业技术研究院 Interactive control platform system
CN101140491A (en) * 2006-09-07 2008-03-12 王舜清 Digital image cursor movement and positioning apparatus system

Also Published As

Publication number Publication date
CN101751118A (en) 2010-06-23

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant