|Publication number||US8488888 B2|
|Application number||US 12/979,897|
|Publication date||16 Jul 2013|
|Filing date||28 Dec 2010|
|Priority date||28 Dec 2010|
|Also published as||CN102591459A, CN102591459B, US20120163723|
|Inventors||Alexandru Balan, Matheen Siddiqui, Ryan M. Geiss, Alex Aben-Athar Kipman, Oliver Michael Christian Williams, Jamie Shotton|
|Original Assignee||Microsoft Corporation|
Controller-free interactive systems, such as gaming systems, may be controlled at least partially by natural movements. In some examples, such systems may employ a depth sensor, or other suitable sensor, to estimate motion of a user and translate the estimated motions into commands to a console of the system. However, in estimating the motions of a user, such systems may only estimate major joints of the user, e.g., via skeleton estimation, and may lack the ability to detect subtle gestures.
Accordingly, various embodiments directed to estimating a posture of a body part of a user are disclosed herein. For example, in one disclosed embodiment, an image is received from a sensor, where the image includes at least a portion of an image of the user including the body part. Skeleton information of the user is estimated from the image, a region of the image corresponding to the body part is identified at least partially based on the skeleton information, a shape descriptor is extracted for the region, and the shape descriptor is classified based on training data to estimate the posture of the body part. A response then may be output based on the estimated posture of the body part.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any or all disadvantages noted in any part of this disclosure.
Controller-free interactive systems, e.g., gaming systems such as shown at 10 in FIG. 1, may be controlled at least partially by natural movements of a user.
However, motion estimation routines such as skeleton mapping may lack the ability to detect subtle gestures of a user. For example, such motion estimation routines may lack the ability to detect and/or distinguish subtle hand gestures, such as a user's open and closed hands, shown at 22 and 24, respectively, in FIG. 1.
Accordingly, systems and methods described herein are directed to determining a state of a hand of a user. For example, the action of closing and opening the hand may be used by such systems to trigger events such as selecting, engaging, or grabbing and dragging objects, e.g., object 26, on the screen, actions which otherwise would correspond to pressing a button when using a controller. Such refined controller-free interaction can be used as an alternative to approaches based on hand waving or hovering, which may be unintuitive or cumbersome. By determining states of a user's hand as described herein, interactivity of a user with the system may be increased, and simpler and more intuitive interfaces may be presented to the user.
At 202, method 200 includes receiving a depth image from a capture device, e.g., capture device 12 shown in FIG. 1.
A depth image of a portion of a user is illustrated in FIG. 3.
At 204, method 200 includes estimating skeleton information of the user to obtain a virtual skeleton from the depth image obtained in step 202. For example, in FIG. 3, a virtual skeleton 304 is shown as estimated from the depth image of the user.
The virtual skeleton 304 may include a plurality of joints, each joint corresponding to a portion of the user. The illustration in FIG. 3 shows a simplified example of such a virtual skeleton.
As remarked above, current motion estimation from depth images, such as the skeleton estimation described above, may lack the ability to detect subtle gestures of a user. For example, such motion estimation routines may lack the ability to detect and/or distinguish subtle hand gestures, such as a user's open and closed hands, shown at 22 and 24, respectively, in FIG. 1.
However, such an estimated skeleton may be used to estimate a variety of other physical attributes of the user. For example, the skeleton data may be used to estimate user body and/or body part size, orientation of one or more user body parts with respect to each other and/or the capture device, depth of one or more user body parts relative to the capture device, etc. Such estimations of physical attributes of a user then may be employed to normalize and reduce variability in detecting and classifying states of a user's hands, as described below.
At 206, method 200 includes segmenting a hand or hands of the user. In some examples, method 200 may additionally include segmenting one or more regions of the body in addition to the hands.
Segmenting a hand of a user includes identifying a region of the depth image corresponding to the hand, where the identifying is at least partially based on the skeleton information obtained in step 204. Likewise, any region of the body of a user may be identified in a similar manner as described below. At 306, FIG. 3 illustrates an example of a region of the depth image identified as corresponding to a hand of the user.
Hands or body regions may be segmented or localized in a variety of ways and may be based on selected joints identified in the skeleton estimation described above.
As one example, hand detection and localization in the depth image may be based on the estimated wrist and/or hand tip joints from the estimated skeleton. For example, in some embodiments, hand segmentation in the depth image may be performed using a topographical search of the depth image around the hand joints, locating nearby local extrema in the depth image as candidates for finger tips, and segmenting the rest of the hand by taking into account a body size scaling factor as determined from the estimated skeleton, as well as depth discontinuities for boundary identification.
As another example, a flood-fill approach may be employed to identify regions of the depth image corresponding to a user's hands. In a flood-fill approach, the depth image may be searched from a starting point and a starting direction, e.g., the starting point may be the wrist joint and the starting direction may be a direction from the elbow to the wrist joint. Nearby pixels in the depth image may be iteratively scored based on their projection onto the starting direction, as a way of giving preference to points moving away from the elbow and toward the hand tip, while depth consistency constraints such as depth discontinuities may be used to identify boundaries or extreme values of a user's hands in the depth image. In some examples, threshold distance values may be used to limit the depth map search in both the positive and negative directions of the starting direction, based on fixed values or scaled based on an estimated size of the user, for example.
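As a rough illustration of such a flood fill, here is a minimal sketch assuming a numpy depth image in millimeters and (row, column) pixel coordinates for the wrist and elbow joints; the function name and all thresholds are hypothetical, not values from this disclosure:

```python
import numpy as np
from collections import deque

def flood_fill_hand(depth, wrist_px, elbow_px,
                    depth_jump_mm=30, back_limit_px=10, max_reach_px=120):
    """Grow a hand region outward from the wrist joint, preferring pixels
    whose offset projects forward along the elbow-to-wrist direction and
    stopping at depth discontinuities. Thresholds are illustrative."""
    h, w = depth.shape
    direction = np.asarray(wrist_px, dtype=float) - np.asarray(elbow_px, dtype=float)
    direction /= np.linalg.norm(direction)

    mask = np.zeros((h, w), dtype=bool)
    mask[wrist_px[0], wrist_px[1]] = True
    queue = deque([tuple(wrist_px)])

    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if not (0 <= ny < h and 0 <= nx < w) or mask[ny, nx]:
                continue
            # Depth-consistency constraint: do not cross discontinuities.
            if abs(int(depth[ny, nx]) - int(depth[y, x])) > depth_jump_mm:
                continue
            # Score by projection onto the starting direction, limiting the
            # search behind the wrist and beyond a fixed reach in front.
            offset = np.array([ny - wrist_px[0], nx - wrist_px[1]], dtype=float)
            proj = offset @ direction
            if proj < -back_limit_px or np.linalg.norm(offset) > max_reach_px:
                continue
            mask[ny, nx] = True
            queue.append((ny, nx))
    return mask
```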
As still another example, a bounding sphere or other suitable bounding shape, positioned based on skeleton joints (e.g. wrist or hand tip joints), may be used to include all pixels in the depth image up to a depth discontinuity. For example, a window may be slid over the bounding sphere to identify depth discontinuities which may be used to establish a boundary in the hand region of the depth image.
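A similar sketch of the bounding-shape variant, under the same assumptions; the radius and depth band below are illustrative placeholders rather than values from this disclosure:

```python
import numpy as np

def bounding_circle_hand(depth, hand_px, radius_px=80, depth_band_mm=150):
    """Include every pixel inside a bounding circle around the hand joint
    whose depth stays within a band around the joint's own depth, i.e.
    up to a depth discontinuity. Radius and band are placeholders."""
    h, w = depth.shape
    ys, xs = np.ogrid[:h, :w]
    inside = (ys - hand_px[0]) ** 2 + (xs - hand_px[1]) ** 2 <= radius_px ** 2
    near = np.abs(depth.astype(int) - int(depth[hand_px[0], hand_px[1]])) <= depth_band_mm
    return inside & near
```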
In some approaches, segmenting of hand regions may be performed when a user raises the hands outward or above the torso. In this way, identification of hand regions in the depth image may be less ambiguous since the hand regions may be distinguished from the body more easily.
It should be understood that the example hand segmentation approaches described above are presented for the purpose of example and are not intended to limit the scope of this disclosure. In general, any hand or body part segmentation method may be used, alone or in combination with others and/or with one of the example methods described above.
Continuing with method 200 in FIG. 2, at 208, method 200 includes extracting a shape descriptor for the hand region identified at 206.
In some examples, the shape descriptor may be invariant to one or more transformations, such as congruency (translation, rotation, reflection, etc.), isometry, depth changes, etc. For example, the shape descriptor may be extracted in such a way as to be invariant to the orientation and location of the hand with respect to the capture device or sensor. A shape descriptor can also be made invariant to reflection, in which case it does not distinguish between the left and right hand. Further, if a shape descriptor is not invariant to reflection, it can always be mirrored by flipping the input image left-right, thereby doubling the amount of training data for each hand. Further, the shape descriptor may be normalized based on an estimated body size so as to be substantially invariant to body and/or hand size differences between different users. Alternatively, a calibration step may be performed in advance where the scale of the person is pre-estimated, in which case the descriptor need not be size invariant.
As one example of shape descriptor extraction, a histogram of distances from the centroid of the hand region to points in the hand region identified in step 206 may be constructed. For example, such a histogram may include fifteen bins, where each bin includes the number of points in the hand region whose distance to the centroid is within a certain distance range associated with that bin. For example, the first bin in such a histogram may include the number of points in the hand region whose distance to the centroid is between 0 and 0.40 centimeters, the second bin includes the number of points whose distance to the centroid is between 0.40 and 0.80 centimeters, and so forth. In this way, a vector may be constructed to codify the shape of the hand. Such vectors may further be normalized based on estimated body size, for example.
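A sketch of this descriptor in numpy, assuming a boolean hand-region mask; the 0.40 cm bin width follows the text, while the pixel-to-centimeter conversion and the body_scale normalization factor are hypothetical stand-ins for a calibrated back-projection:

```python
import numpy as np

def centroid_distance_histogram(hand_mask, cm_per_px=0.1, n_bins=15,
                                bin_cm=0.40, body_scale=1.0):
    """15-bin histogram of point-to-centroid distances over the hand
    region, one 0.40 cm range per bin as in the text. cm_per_px and
    body_scale are hypothetical; a real system would back-project
    pixels with camera intrinsics and normalize by estimated body size."""
    ys, xs = np.nonzero(hand_mask)
    pts_cm = np.stack([ys, xs], axis=1) * (cm_per_px / body_scale)
    dists = np.linalg.norm(pts_cm - pts_cm.mean(axis=0), axis=1)
    edges = np.arange(n_bins + 1) * bin_cm          # 0, 0.40, 0.80, ... cm
    hist, _ = np.histogram(dists, bins=edges)
    return hist / max(hist.sum(), 1)                # normalized shape vector
```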
In another example approach, a histogram may be constructed based on distances and/or angles from points in the hand region to a joint, bone segment or palm plane from the user's estimated skeleton, e.g., the elbow joint, wrist joint, etc.
Another example of a shape descriptor is a Fourier descriptor. Construction of a Fourier descriptor may include codifying a contour of the hand region, e.g., via mapping a distance from each pixel in the hand region to a perimeter of the hand region against a radius of an elliptical fitting of the boundary of the hand and then performing a Fourier transform on the map. Further, such descriptors may be normalized, e.g., relative to an estimated body size. Such descriptors may be invariant to translation, scale, and rotation.
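A minimal sketch of a Fourier descriptor over an ordered hand contour; note it codifies the boundary with a centroid-distance signal rather than the elliptical fitting described above, a deliberate simplification:

```python
import numpy as np

def fourier_descriptor(contour_xy, n_coeffs=16):
    """Fourier descriptor over an ordered, closed hand contour given as
    an (N, 2) array of boundary points. Dropping the DC term gives
    translation invariance, taking magnitudes gives rotation invariance,
    and dividing by the first harmonic gives scale invariance."""
    signal = np.linalg.norm(contour_xy - contour_xy.mean(axis=0), axis=1)
    spectrum = np.fft.fft(signal)
    mags = np.abs(spectrum[1:n_coeffs + 1])
    return mags / (mags[0] + 1e-9)
```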
Still another example of constructing a shape descriptor includes determining a convexity of the hand region, e.g., by determining a ratio of an area of a contour of the hand region to the convex hull of the hand region.
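A small sketch of that convexity ratio, assuming the hand boundary is available as an ordered (N, 2) contour polygon:

```python
import numpy as np
from scipy.spatial import ConvexHull

def convexity(contour_xy):
    """Ratio of the area enclosed by an ordered hand contour to the area
    of its convex hull; near 1.0 for a closed fist, lower for spread
    fingers, whose gaps leave concavities."""
    def shoelace(pts):
        x, y = pts[:, 0], pts[:, 1]
        return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

    hull = ConvexHull(contour_xy)              # hull.vertices are CCW-ordered
    return shoelace(contour_xy) / shoelace(contour_xy[hull.vertices])
```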
It should be understood that the example shape descriptors described above are exemplary in nature and are not intended to limit the scope of this disclosure. In general, any suitable shape descriptor for a hand region may be used, alone or in combination with others and/or with one of the example methods described above. For example, shape descriptors, such as the histograms or vectors described above, may be mixed and matched, combined, and/or concatenated into larger vectors, etc. This may allow the identification of new patterns that were not identifiable from any single descriptor in isolation.
Continuing with method 200, at 210, method 200 includes classifying the state of the hands. For example, the shape descriptor extracted at step 208 may be classified based on training data to estimate the state of the hand. For example, as illustrated at 310 in FIG. 3, the hand may be classified as being in an open or closed state.
In some examples, the training data used in the classification step 210 may be based on a pre-determined set of hand examples. The hand examples may be grouped or labeled based on a representative hand state against which the shape descriptor for the hand region is compared.
In some examples, various metadata may be used to partition the training data. For example, the training data may include a plurality of hand state examples which may be partitioned based on one or more of hand side (e.g., left or right), hand orientation (e.g., lower arm angle or lower arm orientation), depth, and/or a body size of the user, for example. Partitioning of these training hand examples into separate subsets may reduce variability in hand shape within each partition, which may lead to more accurate overall classification of hand state.
Additionally, in some examples, the training data may be specific to a particular application. That is, the training data may depend on expected actions in a given application, e.g., an expected activity in a game, etc. Further, in some examples, the training data may be user specific. For example, an application or game may include a training module wherein a user performs one or more training exercises to calibrate the training data. For example, a user may perform a sequence of open and closed hand postures to establish a training data set used in estimating user hand states during a subsequent interaction with the system.
Classification of a user's hand may be performed based on training examples in a variety of ways. For example, various machine learning techniques may be employed in the classification. Non-limiting examples include: support vector machine training, regression, nearest neighbor, (un)supervised clustering, etc.
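For instance, a minimal sketch of the nearest-neighbor option using scikit-learn; the training arrays here are random stand-ins for labeled shape descriptors, and nothing in this disclosure mandates this particular library:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical training set: one shape-descriptor vector per labeled
# hand example (0 = open, 1 = closed); random stand-ins for real data.
rng = np.random.default_rng(0)
X_train = rng.random((200, 15))            # e.g. 15-bin distance histograms
y_train = rng.integers(0, 2, 200)

clf = KNeighborsClassifier(n_neighbors=5)  # nearest-neighbor classification
clf.fit(X_train, y_train)

observed = rng.random((1, 15))             # descriptor from a new frame
state = clf.predict(observed)[0]           # 0 = open, 1 = closed
confidence = clf.predict_proba(observed).max()  # fraction of agreeing neighbors
```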
As remarked above, such classification techniques may use labeled depth image examples of various hand states for predicting the likelihood of an observed hand as being in one of several states. Additionally, confidences may be added to a classification either during or following the classification step. For example, confidence intervals may be assigned to an estimated hand state based on the training data or by fitting a sigmoid function, or other suitable error function, to the output of the classification step.
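One minimal sketch of the sigmoid-fitting option, a Platt-style calibration by gradient descent on the log loss; the function name, its parameters, and the learning settings are hypothetical:

```python
import numpy as np

def fit_sigmoid_confidence(scores, labels, lr=0.1, steps=2000):
    """Fit p = sigmoid(a * score + b) mapping raw classifier scores to
    calibrated probabilities. scores and labels (0/1) are numpy arrays
    from a held-out set; lr and steps are illustrative."""
    a, b = 0.0, 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(a * scores + b)))
        grad = p - labels                  # dLogLoss/dLogit for each sample
        a -= lr * float(np.mean(grad * scores))
        b -= lr * float(np.mean(grad))
    return a, b
```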
As a simple, non-limiting example of classifying a hand state, there may be two possible hand states, open or closed, such as shown at 310 in FIG. 3.
Various post-classification filtering steps may be employed to increase accuracy of the hand state estimations. Thus, at 211, method 200 may include a filtering step. For example, a temporal-consistency filter, e.g., a low-pass filter, may be applied to predicted hand states across consecutive depth image frames to smooth the predictions and reduce temporal jitter, e.g., due to spurious hand movements, sensor noise, or occasional classification errors. That is, a plurality of states of a user's hand may be estimated from a plurality of depth images from the capture device or sensor, and temporal filtering of the plurality of estimates may be performed to estimate the state of the hand. Further, in some examples, classification results may be biased toward one state or another (e.g., toward open or closed hands), as some applications may be more sensitive to false positives (in one direction or another) than other applications.
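A minimal sketch of such temporal filtering, here an exponential low-pass over per-frame probability estimates with an adjustable decision threshold for biasing toward open or closed; all values are illustrative:

```python
import numpy as np

def smooth_hand_states(per_frame_p_closed, alpha=0.3, closed_threshold=0.5):
    """Exponential low-pass filter over per-frame P(closed) estimates,
    suppressing frame-to-frame jitter; lowering or raising the decision
    threshold biases results toward closed or open, respectively."""
    filtered, state = [], 0.0
    for p in per_frame_p_closed:
        state = alpha * p + (1.0 - alpha) * state
        filtered.append(state > closed_threshold)   # True = closed hand
    return np.array(filtered)
```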
Continuing with method 200, at 212 method 200 includes outputting a response based on the estimated hand state. For example, a command may be output to a console of a computing system, such as console 16 of computing system 10. As another example, a response may be output to a display device, such as display device 20. In this way, estimated motions of the user, including estimated hand states may be translated into commands to a console 16 of the system 10, so that the user may interact with the system as described above. Further, the method and processes described above may be implemented to determine estimates of states of any part of a user's body, e.g., mouth, eyes, etc. For example, a posture of a body part of a user may be estimated using the methods described above.
The methods and processes described herein may be tied to a variety of different types of computing systems. The computing system 10 described above is a nonlimiting example system which includes a gaming console 16, display device 20, and capture device 12. As another, more general, example, FIG. 4 schematically shows a computing system 400 that may perform one or more of the methods and processes described herein.
Computing system 400 may include a logic subsystem 402, a data-holding subsystem 404 operatively connected to the logic subsystem, a display subsystem 406, and/or a capture device 408. The computing system may optionally include components not shown in FIG. 4.
Logic subsystem 402 may include one or more physical devices configured to execute one or more instructions. For example, logic subsystem 402 may be configured to execute one or more instructions that are part of one or more programs, routines, objects, components, data structures, or other logical constructs. Such instructions may be implemented to perform a task, implement a data type, transform the state of one or more devices, or otherwise arrive at a desired result. Logic subsystem 402 may include one or more processors that are configured to execute software instructions. Additionally or alternatively, logic subsystem 402 may include one or more hardware or firmware logic machines configured to execute hardware or firmware instructions. Logic subsystem 402 may optionally include individual components that are distributed throughout two or more devices, which may be remotely located in some embodiments.
Data-holding subsystem 404 may include one or more physical devices configured to hold data and/or instructions executable by the logic subsystem to implement the herein described methods and processes. When such methods and processes are implemented, the state of data-holding subsystem 404 may be transformed (e.g., to hold different data). Data-holding subsystem 404 may include removable media and/or built-in devices. Data-holding subsystem 404 may include optical storage devices, semiconductor memory and storage devices (e.g., RAM, EEPROM, flash, etc.), and/or magnetic storage devices, among others. Data-holding subsystem 404 may include devices with one or more of the following characteristics: volatile, nonvolatile, dynamic, static, read/write, read-only, random access, sequential access, location addressable, file addressable, and content addressable. In some embodiments, logic subsystem 402 and data-holding subsystem 404 may be integrated into one or more common devices, such as an application specific integrated circuit or a system on a chip.
Display subsystem 406 may be used to present a visual representation of data held by data-holding subsystem 404. As the herein described methods and processes change the data held by the data-holding subsystem, and thus transform the state of the data-holding subsystem, the state of display subsystem 406 may likewise be transformed to visually represent changes in the underlying data. Display subsystem 406 may include one or more display devices utilizing virtually any type of technology. Such display devices may be combined with logic subsystem 402 and/or data-holding subsystem 404 in a shared enclosure, or such display devices may be peripheral display devices.
Computing system 400 further includes a capture device 408 configured to obtain depth images of one or more targets and/or scenes. Capture device 408 may be configured to capture video with depth information via any suitable technique (e.g., time-of-flight, structured light, stereo image, etc.). As such, capture device 408 may include a depth camera, a video camera, stereo cameras, and/or other suitable capture devices.
For example, in time-of-flight analysis, the capture device 408 may emit infrared light to the scene and may then use sensors to detect the backscattered light from the surfaces of the scene. In some cases, pulsed infrared light may be used, wherein the time between an outgoing light pulse and a corresponding incoming light pulse may be measured and used to determine a physical distance from the capture device to a particular location on the scene. In some cases, the phase of the outgoing light wave may be compared to the phase of the incoming light wave to determine a phase shift, and the phase shift may be used to determine a physical distance from the capture device to a particular location in the scene.
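As a hedged numeric illustration of the phase-shift variant: at modulation frequency f, a measured phase shift phi corresponds to range d = c * phi / (4 * pi * f), the extra factor of two in the denominator accounting for the round trip; the 30 MHz frequency below is an assumed example, not a parameter of this disclosure:

```python
import math

c = 299_792_458.0        # speed of light, m/s
f_mod = 30e6             # assumed 30 MHz modulation frequency
phi = math.pi / 2        # example measured phase shift, radians

# Light travels out and back, so distance = c * phi / (4 * pi * f_mod).
distance_m = c * phi / (4 * math.pi * f_mod)
print(f"{distance_m:.2f} m")   # about 1.25 m for this example
```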
In another example, time-of-flight analysis may be used to indirectly determine a physical distance from the capture device to a particular location in the scene by analyzing the intensity of the reflected beam of light over time via a technique such as shuttered light pulse imaging.
In another example, structured light analysis may be utilized by capture device 408 to capture depth information. In such an analysis, patterned light (e.g., light displayed as a known pattern such as a grid pattern or a stripe pattern) may be projected onto the scene. On the surfaces of the scene, the pattern may become deformed, and this deformation of the pattern may be studied to determine a physical distance from the capture device to a particular location in the scene.
In another example, the capture device may include two or more physically separated cameras that view a scene from different angles, to obtain visual stereo data. In such cases, the visual stereo data may be resolved to generate a depth image.
In other embodiments, capture device 408 may utilize other technologies to measure and/or calculate depth values.
In some embodiments, two or more different cameras may be incorporated into an integrated capture device. For example, a depth camera and a video camera (e.g., RGB video camera) may be incorporated into a common capture device. In some embodiments, two or more separate capture devices may be cooperatively used. For example, a depth camera and a separate video camera may be used. When a video camera is used, it may be used to provide target tracking data, confirmation data for error correction of scene analysis, image capture, face recognition, high-precision tracking of fingers or other small features, light sensing, and/or other functions. In some embodiments, two or more depth and/or RGB cameras may be placed on different sides of the subject to obtain a more complete 3D model of the subject or to further refine the resolution of the observations around the hands. In other embodiments, a single camera may be used, e.g., to obtain an RGB image, and the image may be segmented based on color, e.g., based on a color of a hand.
It is to be understood that at least some depth analysis operations may be executed by a logic machine of one or more capture devices. A capture device may include one or more onboard processing units configured to perform one or more depth analysis functions. A capture device may include firmware to facilitate updating such onboard processing logic.
Computing system 400 may further include various subsystems configured to execute one or more instructions that are part of one or more programs, routines, objects, components, data structures, or other logical constructs. Such subsystems may be operatively connected to logic subsystem 402 and/or data-holding subsystem 404. In some examples, such subsystems may be implemented as software stored on a removable or non-removable computer-readable storage medium.
For example, computing system 400 may include an image segmentation subsystem 410 configured to identify a region of the depth image corresponding to the hand, the identifying being at least partially based on the skeleton information. Computing system 400 may additionally include a descriptor extraction subsystem 412 configured to extract a shape descriptor for a region identified by image segmentation subsystem 410. Computing system 400 may further include a classifier subsystem 414 configured to classify the shape descriptor based on training data to estimate the state of the hand.
It is to be understood that the configurations and/or approaches described herein are exemplary in nature, and that these specific embodiments or examples are not to be considered in a limiting sense, because numerous variations are possible. The specific routines or methods described herein may represent one or more of any number of processing strategies. As such, various acts illustrated may be performed in the sequence illustrated, in other sequences, in parallel, or in some cases omitted. Likewise, the order of the above-described processes may be changed.
Additionally, it should be understood that the examples of open and closed hand detection described here are exemplary in nature and are not intended to limit the scope of this disclosure. The methods and systems described herein may be applied to estimating a variety of refined gestures in a depth image. For example, various other hand profiles may be estimated using the systems and methods described herein. Non-limiting examples include: fist postures, open palm postures, pointing fingers, etc.
The subject matter of the present disclosure includes all novel and nonobvious combinations and subcombinations of the various processes, systems and configurations, and other features, functions, acts, and/or properties disclosed herein, as well as any and all equivalents thereof.
|Cited Patent||Filing date||Publication date||Applicant||Title|
|US4627620||26 Dec 1984||9 Dec 1986||Yang John P||Electronic athlete trainer for improving skills in reflex, speed and accuracy|
|US4630910||16 Feb 1984||23 Dec 1986||Robotic Vision Systems, Inc.||Method of measuring in three-dimensions at high speed|
|US4645458||15 Apr 1985||24 Feb 1987||Harald Phillip||Athletic evaluation and training apparatus|
|US4695953||14 Apr 1986||22 Sep 1987||Blair Preston E||TV animation interactively controlled by the viewer|
|US4702475||25 Jul 1986||27 Oct 1987||Innovating Training Products, Inc.||Sports technique and reaction training system|
|US4711543||29 Jan 1987||8 Dec 1987||Blair Preston E||TV animation interactively controlled by the viewer|
|US4751642||29 Aug 1986||14 Jun 1988||Silva John M||Interactive sports simulation system with physiological sensing and psychological conditioning|
|US4796997||21 May 1987||10 Jan 1989||Synthetic Vision Systems, Inc.||Method and system for high-speed, 3-D imaging of an object at a vision station|
|US4809065||1 Dec 1986||28 Feb 1989||Kabushiki Kaisha Toshiba||Interactive system and related method for displaying data to produce a three-dimensional image of an object|
|US4817950||8 May 1987||4 Apr 1989||Goo Paul E||Video game control unit and attitude sensor|
|US4843568||11 Apr 1986||27 Jun 1989||Krueger Myron W||Real time perception of and response to the actions of an unencumbered participant/user|
|US4893183||11 Aug 1988||9 Jan 1990||Carnegie-Mellon University||Robotic vision system|
|US4901362||8 Aug 1988||13 Feb 1990||Raytheon Company||Method of recognizing patterns|
|US4925189||13 Jan 1989||15 May 1990||Braeunig Thomas F||Body-mounted video game exercise device|
|US5101444||18 May 1990||31 Mar 1992||Panacea, Inc.||Method and apparatus for high speed object location|
|US5148154||4 Dec 1990||15 Sep 1992||Sony Corporation Of America||Multi-dimensional user interface|
|US5184295||16 Oct 1989||2 Feb 1993||Mann Ralph V||System and method for teaching physical skills|
|US5229754||11 Feb 1991||20 Jul 1993||Yazaki Corporation||Automotive reflection type display apparatus|
|US5229756||14 May 1992||20 Jul 1993||Yamaha Corporation||Image control apparatus|
|US5239463||9 Dec 1991||24 Aug 1993||Blair Preston E||Method and apparatus for player interaction with animated characters and objects|
|US5239464||9 Dec 1991||24 Aug 1993||Blair Preston E||Interactive video system providing repeated switching of multiple tracks of actions sequences|
|US5288078||16 Jul 1992||22 Feb 1994||David G. Capper||Control interface apparatus|
|US5295491||26 Sep 1991||22 Mar 1994||Sam Technology, Inc.||Non-invasive human neurocognitive performance capability testing method and system|
|US5320538||23 Sep 1992||14 Jun 1994||Hughes Training, Inc.||Interactive aircraft training system and method|
|US5347306||17 Dec 1993||13 Sep 1994||Mitsubishi Electric Research Laboratories, Inc.||Animated electronic meeting place|
|US5385519||19 Apr 1994||31 Jan 1995||Hsu; Chi-Hsueh||Running machine|
|US5405152||8 Jun 1993||11 Apr 1995||The Walt Disney Company||Method and apparatus for an interactive video game with physical feedback|
|US5417210||27 May 1992||23 May 1995||International Business Machines Corporation||System and method for augmentation of endoscopic surgery|
|US5423554||24 Sep 1993||13 Jun 1995||Metamedia Ventures, Inc.||Virtual reality game method and apparatus|
|US5454043||30 Jul 1993||26 Sep 1995||Mitsubishi Electric Research Laboratories, Inc.||Dynamic and static hand gesture recognition through low-level image analysis|
|US5469740||2 Dec 1992||28 Nov 1995||Impulse Technology, Inc.||Interactive video testing and training system|
|US5495576||11 Jan 1993||27 Feb 1996||Ritchey; Kurtis J.||Panoramic image based virtual reality/telepresence audio-visual system and method|
|US5516105||6 Oct 1994||14 May 1996||Exergame, Inc.||Acceleration activated joystick|
|US5524637||29 Jun 1994||11 Jun 1996||Erickson; Jon W.||Interactive system for measuring physiological exertion|
|US5534917||9 May 1991||9 Jul 1996||Very Vivid, Inc.||Video image based control system|
|US5563988||1 Aug 1994||8 Oct 1996||Massachusetts Institute Of Technology||Method and system for facilitating wireless, full-body, real-time user interaction with a digitally represented visual environment|
|US5577981||4 Aug 1995||26 Nov 1996||Jarvik; Robert||Virtual reality exercise machine and computer controlled video system|
|US5580249||14 Feb 1994||3 Dec 1996||Sarcos Group||Apparatus for simulating mobility of a human|
|US5594469||21 Feb 1995||14 Jan 1997||Mitsubishi Electric Information Technology Center America Inc.||Hand gesture machine control system|
|US5597309||28 Mar 1994||28 Jan 1997||Riess; Thomas||Method and apparatus for treatment of gait problems associated with parkinson's disease|
|US5616078||27 Dec 1994||1 Apr 1997||Konami Co., Ltd.||Motion-controlled video entertainment system|
|US5617312||18 Nov 1994||1 Apr 1997||Hitachi, Ltd.||Computer system that enters control information by means of video camera|
|US5638300||5 Dec 1994||10 Jun 1997||Johnson; Lee E.||Golf swing analysis system|
|US5641288||11 Jan 1996||24 Jun 1997||Zaenglein, Jr.; William G.||Shooting simulating process and training device using a virtual reality display screen|
|US5682196||22 Jun 1995||28 Oct 1997||Actv, Inc.||Three-dimensional (3D) video presentation system providing interactive 3D presentation with personalized audio responses for multiple viewers|
|US5682229||14 Apr 1995||28 Oct 1997||Schwartz Electro-Optics, Inc.||Laser range camera|
|US5690582||1 Jun 1995||25 Nov 1997||Tectrix Fitness Equipment, Inc.||Interactive exercise apparatus|
|US5703367||8 Dec 1995||30 Dec 1997||Matsushita Electric Industrial Co., Ltd.||Human occupancy detection method and system for implementing the same|
|US5704837||25 Mar 1994||6 Jan 1998||Namco Ltd.||Video game steering system causing translation, rotation and curvilinear motion on the object|
|US5715834||16 May 1995||10 Feb 1998||Scuola Superiore Di Studi Universitari & Di Perfezionamento S. Anna||Device for monitoring the configuration of a distal physiological unit for use, in particular, as an advanced interface for machine and computers|
|US5774591||15 Dec 1995||30 Jun 1998||Xerox Corporation||Apparatus and method for recognizing facial expressions and facial gestures in a sequence of images|
|US5875108||6 Jun 1995||23 Feb 1999||Hoffberg; Steven M.||Ergonomic man-machine interface incorporating adaptive pattern recognition based control system|
|US5877803||7 Apr 1997||2 Mar 1999||Tritech Mircoelectronics International, Ltd.||3-D image detector|
|US5913727||13 Jun 1997||22 Jun 1999||Ahdoot; Ned||Interactive movement and contact simulation game|
|US5933125||27 Nov 1995||3 Aug 1999||Cae Electronics, Ltd.||Method and apparatus for reducing instability in the display of a virtual environment|
|US5980256||13 Feb 1996||9 Nov 1999||Carmein; David E. E.||Virtual reality system with enhanced sensory apparatus|
|US5989157||11 Jul 1997||23 Nov 1999||Walton; Charles A.||Exercising system with electronic inertial game playing|
|US5995649||22 Sep 1997||30 Nov 1999||Nec Corporation||Dual-input image processor for recognizing, isolating, and displaying specific objects from the input images|
|US6005548||14 Aug 1997||21 Dec 1999||Latypov; Nurakhmed Nurislamovich||Method for tracking and displaying user's spatial position and orientation, a method for representing virtual reality for a user, and systems of embodiment of such methods|
|US6009210||5 Mar 1997||28 Dec 1999||Digital Equipment Corporation||Hands-free interface to a virtual reality environment using head tracking|
|US6054991||29 Jul 1994||25 Apr 2000||Texas Instruments Incorporated||Method of modeling player position and movement in a virtual reality system|
|US6066075||29 Dec 1997||23 May 2000||Poulton; Craig K.||Direct feedback controller for user interaction|
|US6072494||15 Oct 1997||6 Jun 2000||Electric Planet, Inc.||Method and apparatus for real-time gesture recognition|
|US6073489||3 Mar 1998||13 Jun 2000||French; Barry J.||Testing and training system for assessing the ability of a player to complete a task|
|US6077201||12 Jun 1998||20 Jun 2000||Cheng; Chau-Yang||Exercise bicycle|
|US6098458||6 Nov 1995||8 Aug 2000||Impulse Technology, Ltd.||Testing and training system for assessing movement and agility skills without a confining field|
|US6100896||24 Mar 1997||8 Aug 2000||Mitsubishi Electric Information Technology Center America, Inc.||System for designing graphical multi-participant environments|
|US6101289||15 Oct 1997||8 Aug 2000||Electric Planet, Inc.||Method and apparatus for unencumbered capture of an object|
|US6128003||22 Dec 1997||3 Oct 2000||Hitachi, Ltd.||Hand gesture recognition system and method|
|US6130677||15 Oct 1997||10 Oct 2000||Electric Planet, Inc.||Interactive computer vision system|
|US6141463||3 Dec 1997||31 Oct 2000||Electric Planet Interactive||Method and system for estimating jointed-figure configurations|
|US6147678||9 Dec 1998||14 Nov 2000||Lucent Technologies Inc.||Video hand image-three-dimensional computer interface with multiple degrees of freedom|
|US6152856||8 May 1997||28 Nov 2000||Real Vision Corporation||Real time simulation using position sensing|
|US6159100||23 Apr 1998||12 Dec 2000||Smith; Michael D.||Virtual reality game|
|US6173066||21 May 1997||9 Jan 2001||Cybernet Systems Corporation||Pose determination and tracking by matching 3D objects to a 2D sensor|
|US6181343||23 Dec 1997||30 Jan 2001||Philips Electronics North America Corp.||System and method for permitting three-dimensional navigation through a virtual reality environment using camera-based gesture inputs|
|US6188777||22 Jun 1998||13 Feb 2001||Interval Research Corporation||Method and apparatus for personnel detection and tracking|
|US6215890||25 Sep 1998||10 Apr 2001||Matsushita Electric Industrial Co., Ltd.||Hand gesture recognizing device|
|US6215898||15 Apr 1997||10 Apr 2001||Interval Research Corporation||Data processing system and method|
|US6226396||31 Jul 1998||1 May 2001||Nec Corporation||Object extraction method and system|
|US6229913||7 Jun 1995||8 May 2001||The Trustees Of Columbia University In The City Of New York||Apparatus and methods for determining the three-dimensional shape of an object using active illumination and relative blurring in two-images due to defocus|
|US6256033||10 Aug 1999||3 Jul 2001||Electric Planet||Method and apparatus for real-time gesture recognition|
|US6256400||28 Sep 1999||3 Jul 2001||Matsushita Electric Industrial Co., Ltd.||Method and device for segmenting hand gestures|
|US6283860||14 Aug 2000||4 Sep 2001||Philips Electronics North America Corp.||Method, system, and program for gesture based option selection|
|US6289112||25 Feb 1998||11 Sep 2001||International Business Machines Corporation||System and method for determining block direction in fingerprint images|
|US6299308||31 Mar 2000||9 Oct 2001||Cybernet Systems Corporation||Low-cost non-imaging eye tracker system for computer control|
|US6308565||15 Oct 1998||30 Oct 2001||Impulse Technology Ltd.||System and method for tracking and assessing movement skills in multidimensional space|
|US6316934||30 Jun 1999||13 Nov 2001||Netmor Ltd.||System for three dimensional positioning and tracking|
|US6363160||22 Jan 1999||26 Mar 2002||Intel Corporation||Interface using pattern recognition and tracking|
|US6384819||15 Oct 1998||7 May 2002||Electric Planet, Inc.||System and method for generating an animatable character|
|US6411744||15 Oct 1998||25 Jun 2002||Electric Planet, Inc.||Method and apparatus for performing a clean background subtraction|
|US6430997||5 Sep 2000||13 Aug 2002||Trazer Technologies, Inc.||System and method for tracking and assessing movement skills in multidimensional space|
|US6476834||28 May 1999||5 Nov 2002||International Business Machines Corporation||Dynamic creation of selectable items on surfaces|
|US6496598||1 Mar 2000||17 Dec 2002||Dynamic Digital Depth Research Pty. Ltd.||Image processing method and apparatus|
|US6503195||24 May 1999||7 Jan 2003||University Of North Carolina At Chapel Hill||Methods and systems for real-time structured light depth extraction and endoscope using real-time structured light depth extraction|
|US6539931||16 Apr 2001||1 Apr 2003||Koninklijke Philips Electronics N.V.||Ball throwing assistant|
|US6570555||30 Dec 1998||27 May 2003||Fuji Xerox Co., Ltd.||Method and apparatus for embodied conversational characters with multimodal input/output in an interface device|
|US6633294||9 Mar 2001||14 Oct 2003||Seth Rosenthal||Method and apparatus for using captured high density motion for animation|
|US6640202||25 May 2000||28 Oct 2003||International Business Machines Corporation||Elastic sensor mesh system for 3-dimensional measurement, mapping and kinematics applications|
|US6661918||3 Dec 1999||9 Dec 2003||Interval Research Corporation||Background estimation and segmentation based on range and color|
|US6681031||10 Aug 1999||20 Jan 2004||Cybernet Systems Corporation||Gesture-controlled interfaces for self-service machines and other applications|
|US6714665||3 Dec 1996||30 Mar 2004||Sarnoff Corporation||Fully automated iris recognition system utilizing wide and narrow fields of view|
|US6721444||17 Mar 2000||13 Apr 2004||Matsushita Electric Works, Ltd.||3-dimensional object recognition method and bin-picking system using the method|
|US6731799||1 Jun 2000||4 May 2004||University Of Washington||Object segmentation with background extraction and moving boundary techniques|
|US6738066||30 Jul 1999||18 May 2004||Electric Plant, Inc.||System, method and article of manufacture for detecting collisions between video images generated by a camera and an object depicted on a display|
|US6765726||17 Jul 2002||20 Jul 2004||Impluse Technology Ltd.||System and method for tracking and assessing movement skills in multidimensional space|
|US6788809 *||30 Jun 2000||7 Sep 2004||Intel Corporation||System and method for gesture recognition in three dimensions using stereo imaging and color vision|
|US6801637||22 Feb 2001||5 Oct 2004||Cybernet Systems Corporation||Optical body tracker|
|US6873723||30 Jun 1999||29 Mar 2005||Intel Corporation||Segmenting three-dimensional video images using stereo|
|US6876496||9 Jul 2004||5 Apr 2005||Impulse Technology Ltd.||System and method for tracking and assessing movement skills in multidimensional space|
|US6937742||28 Sep 2001||30 Aug 2005||Bellsouth Intellectual Property Corporation||Gesture activated home appliance|
|US6950534||16 Jan 2004||27 Sep 2005||Cybernet Systems Corporation||Gesture-controlled interfaces for self-service machines and other applications|
|US7003134||8 Mar 2000||21 Feb 2006||Vulcan Patents Llc||Three dimensional object pose estimation which employs dense depth information|
|US7007035||8 Jun 2001||28 Feb 2006||The Regents Of The University Of California||Parallel object-oriented decision tree system|
|US7036094||31 Mar 2000||25 Apr 2006||Cybernet Systems Corporation||Behavior recognition system|
|US7038855||5 Apr 2005||2 May 2006||Impulse Technology Ltd.||System and method for tracking and assessing movement skills in multidimensional space|
|US7039676||31 Oct 2000||2 May 2006||International Business Machines Corporation||Using video image analysis to automatically transmit gestures over a network in a chat or instant messaging session|
|US7042440||21 Jul 2003||9 May 2006||Pryor Timothy R||Man machine interfaces and applications|
|US7050606||1 Nov 2001||23 May 2006||Cybernet Systems Corporation||Tracking and gesture recognition system particularly suited to vehicular control applications|
|US7058204||26 Sep 2001||6 Jun 2006||Gesturetek, Inc.||Multiple camera control system|
|US7060957||26 Mar 2001||13 Jun 2006||Csem Centre Suisse D'electronique Et Microtechinique Sa||Device and method for spatially resolved photodetection and demodulation of modulated electromagnetic waves|
|US7113918||1 Aug 1999||26 Sep 2006||Electric Planet, Inc.||Method for video enabled electronic commerce|
|US7121946||29 Jun 2001||17 Oct 2006||Cybernet Systems Corporation||Real-time head tracking system for computer games and other applications|
|US7170492||18 Mar 2005||30 Jan 2007||Reactrix Systems, Inc.||Interactive video display system|
|US7184048||18 Oct 2001||27 Feb 2007||Electric Planet, Inc.||System and method for generating an animatable character|
|US7202898||16 Dec 1998||10 Apr 2007||3Dv Systems Ltd.||Self gating photosurface|
|US7222078||10 Dec 2003||22 May 2007||Ferrara Ethereal Llc||Methods and systems for gathering information from units of a commodity across a network|
|US7227526||23 Jul 2001||5 Jun 2007||Gesturetek, Inc.||Video-based image control system|
|US7257237||4 Mar 2004||14 Aug 2007||Sandia Corporation||Real time markerless motion tracking using linked kinematic chains|
|US7259747||28 May 2002||21 Aug 2007||Reactrix Systems, Inc.||Interactive video display system|
|US7289645||27 Oct 2003||30 Oct 2007||Mitsubishi Fuso Truck And Bus Corporation||Hand pattern switch device|
|US7308112||12 May 2005||11 Dec 2007||Honda Motor Co., Ltd.||Sign based human-machine interaction|
|US7317836||17 Mar 2006||8 Jan 2008||Honda Motor Co., Ltd.||Pose estimation based on critical point analysis|
|US7348963||5 Aug 2005||25 Mar 2008||Reactrix Systems, Inc.||Interactive video display system|
|US7359121||1 May 2006||15 Apr 2008||Impulse Technology Ltd.||System and method for tracking and assessing movement skills in multidimensional space|
|US7367887||7 Jul 2003||6 May 2008||Namco Bandai Games Inc.||Game apparatus, storage medium, and computer program that adjust level of game difficulty|
|US7372977||28 May 2004||13 May 2008||Honda Motor Co., Ltd.||Visual tracking using depth data|
|US7379563||15 Apr 2005||27 May 2008||Gesturetek, Inc.||Tracking bimanual movements|
|US7379566||6 Jan 2006||27 May 2008||Gesturetek, Inc.||Optical flow based tilt sensor|
|US7389591||17 May 2006||24 Jun 2008||Gesturetek, Inc.||Orientation-sensitive signal output|
|US7412077||29 Dec 2006||12 Aug 2008||Motorola, Inc.||Apparatus and methods for head pose estimation and head gesture detection|
|US7421093||19 Dec 2005||2 Sep 2008||Gesturetek, Inc.||Multiple camera control system|
|US7430312||9 Jan 2006||30 Sep 2008||Gesturetek, Inc.||Creating 3D images of objects by illuminating with infrared patterns|
|US7436496||28 Jan 2004||14 Oct 2008||National University Corporation Shizuoka University||Distance image sensor|
|US7450736||26 Oct 2006||11 Nov 2008||Honda Motor Co., Ltd.||Monocular tracking of 3D human motion with a coordinated mixture of factor analyzers|
|US7452275||25 Jun 2002||18 Nov 2008||Konami Digital Entertainment Co., Ltd.||Game device, game controlling method and program|
|US7460690||14 Sep 2005||2 Dec 2008||Cybernet Systems Corporation||Gesture-controlled interfaces for self-service machines and other applications|
|US7489812||7 Jun 2002||10 Feb 2009||Dynamic Digital Depth Research Pty Ltd.||Conversion and encoding techniques|
|US7536032||25 Oct 2004||19 May 2009||Reactrix Systems, Inc.||Method and system for processing captured image information in an interactive video display system|
|US7555142||31 Oct 2007||30 Jun 2009||Gesturetek, Inc.||Multiple camera control system|
|US7560701||11 Aug 2006||14 Jul 2009||Mesa Imaging Ag||Highly sensitive, fast pixel for use in an image sensor|
|US7570805||23 Apr 2008||4 Aug 2009||Gesturetek, Inc.||Creating 3D images of objects by illuminating with infrared patterns|
|US7574020||7 Apr 2008||11 Aug 2009||Gesturetek, Inc.||Detecting and tracking objects in images|
|US7574411||29 Apr 2004||11 Aug 2009||Nokia Corporation||Low memory decision tree|
|US7576727||15 Dec 2003||18 Aug 2009||Matthew Bell||Interactive directed light/sound system|
|US7590262||21 Apr 2008||15 Sep 2009||Honda Motor Co., Ltd.||Visual tracking using depth data|
|US7593552||22 Mar 2004||22 Sep 2009||Honda Motor Co., Ltd.||Gesture recognition apparatus, gesture recognition method, and gesture recognition program|
|US7598942||8 Feb 2006||6 Oct 2009||Oblong Industries, Inc.||System and method for gesture based control system|
|US7607509||13 Jan 2003||27 Oct 2009||Iee International Electronics & Engineering S.A.||Safety device for a vehicle|
|US7620202||14 Jun 2004||17 Nov 2009||Honda Motor Co., Ltd.||Target orientation estimation using depth sensing|
|US7668340||2 Dec 2008||23 Feb 2010||Cybernet Systems Corporation||Gesture-controlled interfaces for self-service machines and other applications|
|US7680298||8 Oct 2008||16 Mar 2010||At&T Intellectual Property I, L. P.||Methods, systems, and products for gesture-activated appliances|
|US7683954||27 Sep 2007||23 Mar 2010||Brainvision Inc.||Solid-state image sensor|
|US7684592||14 Jan 2008||23 Mar 2010||Cybernet Systems Corporation||Realtime object tracking system|
|US7701439||13 Jul 2006||20 Apr 2010||Northrop Grumman Corporation||Gesture recognition simulation system and method|
|US7702130||29 Sep 2005||20 Apr 2010||Electronics And Telecommunications Research Institute||User interface apparatus using hand gesture recognition and method thereof|
|US7704135||23 Aug 2005||27 Apr 2010||Harrison Jr Shelton E||Integrated game system, method, and device|
|US7710391||20 Sep 2004||4 May 2010||Matthew Bell||Processing an image utilizing a spatially varying pattern|
|US7729530||3 Mar 2007||1 Jun 2010||Sergey Antonov||Method and apparatus for 3-D data input to a personal computer with a multimedia oriented operating system|
|US7746345||27 Feb 2007||29 Jun 2010||Hunter Kevin L||System and method for generating an animatable character|
|US7760182||21 Aug 2006||20 Jul 2010||Subutai Ahmad||Method for video enabled electronic commerce|
|US7809167||19 May 2009||5 Oct 2010||Matthew Bell||Method and system for processing captured image information in an interactive video display system|
|US7834846||21 Aug 2006||16 Nov 2010||Matthew Bell||Interactive video display system|
|US7852262||15 Aug 2008||14 Dec 2010||Cybernet Systems Corporation||Wireless mobile indoor/outdoor tracking system|
|US7898522||1 Jun 2007||1 Mar 2011||Gesturetek, Inc.||Video-based image control system|
|US7974443 *||23 Nov 2010||5 Jul 2011||Microsoft Corporation||Visual target tracking using model fitting and exemplar|
|US8035612||20 Sep 2004||11 Oct 2011||Intellectual Ventures Holding 67 Llc||Self-contained interactive video display system|
|US8035614||30 Oct 2007||11 Oct 2011||Intellectual Ventures Holding 67 Llc||Interactive video window|
|US8035624||30 Oct 2007||11 Oct 2011||Intellectual Ventures Holding 67 Llc||Computer vision based touch screen|
|US8072470||29 May 2003||6 Dec 2011||Sony Computer Entertainment Inc.||System and method for providing a real-time three-dimensional interactive environment|
|US20020041327||23 Jul 2001||11 Apr 2002||Evan Hildreth||Video-based image control system|
|US20030085887||8 Feb 2002||8 May 2003||Smartequip, Inc.||Method and system for identifying objects using a shape-fitting algorithm|
|US20080019589||17 Jul 2007||24 Jan 2008||Ho Sub Yoon||Method and apparatus for recognizing gesture in image processing system|
|US20080026838||21 Aug 2006||31 Jan 2008||Dunstan James E||Multi-player non-role-playing virtual world games: method for two-way interaction between participants and multi-player virtual world games|
|US20080201340||27 Dec 2007||21 Aug 2008||Infosys Technologies Ltd.||Decision tree construction via frequent predictive itemsets and best attribute splits|
|US20090110292||26 Oct 2007||30 Apr 2009||Honda Motor Co., Ltd.||Hand Sign Recognition Using Label Assignment|
|US20100094800||9 Oct 2008||15 Apr 2010||Microsoft Corporation||Evaluating Decision Trees on a GPU|
|US20100197392||7 Dec 2009||5 Aug 2010||Microsoft Corporation||Visual target tracking|
|US20100214322 *||16 Feb 2010||26 Aug 2010||Samsung Electronics Co., Ltd.||Method for controlling display and device using the same|
|US20100215257||19 Feb 2010||26 Aug 2010||Honda Motor Co., Ltd.||Capturing and recognizing hand postures using inner distance shape contexts|
|US20120092445 *||14 Oct 2010||19 Apr 2012||Microsoft Corporation||Automatically tracking user movement in a video chat application|
|US20120154373||15 Dec 2010||21 Jun 2012||Microsoft Corporation||Parallel processing machine learning decision tree training|
|US20120163723||28 Dec 2010||28 Jun 2012||Microsoft Corporation||Classification of posture states|
|USRE42256||9 Jan 2009||29 Mar 2011||Elet Systems L.L.C.||Method and apparatus for performing a clean background subtraction|
|CN201254344Y||20 Aug 2008||10 Jun 2009||中国农业科学院草原研究所||Plant specimens and seed storage|
|EP0583061A2||5 Jul 1993||16 Feb 1994||The Walt Disney Company||Method and apparatus for providing enhanced graphics in a virtual world|
|JP08044490A1||Title not available|
|WO 93/10708 A1||Title not available|
|WO 97/17598 A1||Title not available|
|WO 99/44698 A1||Title not available|
|1||"Human motion-capture for Xbox Kinect", Retrieved at << http://research.microsft.com/en-us/projects/vrkinect/ >>, Retrieved Date: Apr. 15, 2011, 3 Pages.|
|2||"Human motion-capture for Xbox Kinect", Retrieved at >, Retrieved Date: Apr. 15, 2011, 3 Pages.|
|3||"Simulation and Training", 1994, Division Incorporated.|
|4||"Virtual High Anxiety", Tech Update, Aug. 1995, pp. 22.|
|5||Aggarwal et al., "Human Motion Analysis: A Review", IEEE Nonrigid and Articulated Motion Workshop, 1997, University of Texas at Austin, Austin, TX.|
|6||Athitsos, et al., "An Appearance-Based Framework for 3D Hand Shape Classification and Camera Viewpoint Estimation", Retrieved at << http://luthuli.cs.uiuc.edu/˜daf/courses/AppCV/Papers/01004129.pdf >>, In Proceedings of the Fifth IEEE International Conference on Automatic Face and Gesture Recognition, 2002, pp. 6.|
|8||Azarbayejani et al., "Visually Controlled Graphics", Jun. 1993, vol. 15, No. 6, IEEE Transactions on Pattern Analysis and Machine Intelligence.|
|9||Balan, A. et al., "Attribute State Classification," U.S. Appl. No. 13/098,899, filed May 2, 2011, 38 pages.|
|10||Breen et al., "Interactive Occlusion and Collusion of Real and Virtual Objects in Augmented Reality", Technical Report ECRC-95-02, 1995, European Computer-Industry Research Center GmbH, Munich, Germany.|
|11||Brogan et al., "Dynamically Simulated Characters in Virtual Environments", Sep./Oct. 1998, pp. 2-13, vol. 18, Issue 5, IEEE Computer Graphics and Applications.|
|12||Cohen, I. et al., "Inference of Human Postures by Classification of 3d Human Body Shape", IEEE Workshop on Analysis and Modeling of Faces and Gestures, Mar. 2003, 8 pages.|
|13||Finocchio, M. et al., "Parallel Processing Machine Learning Decision Tree Training", U.S. Appl. No. 12/969,112, filed Dec. 15, 2010, 33 pages.|
|14||Fisher et al., "Virtual Environment Display System", ACM Workshop on Interactive 3D Graphics, Oct. 1986, Chapel Hill, NC.|
|15||Freeman et al., "Television Control by Hand Gestures", Dec. 1994, Mitsubishi Electric Research Laboratories, TR94-24, Cambridge, MA.|
|16||Granieri et al., "Simulating Humans in VR", The British Computer Society, Oct. 1994, Academic Press.|
|17||Hasegawa et al., "Human-Scale Haptic Interaction with a Reactive Virtual Human in a Real-Time Physics Simulator", Jul. 2006, vol. 4, No. 3, Article 6C, ACM Computers in Entertainment, New York, NY.|
|18||He, "Generation of Human Body Models", Apr. 2005, University of Auckland, New Zealand.|
|19||Hongo et al., "Focus of Attention for Face and Hand Gesture Recognition Using Multiple Cameras", Mar. 2000, pp. 156-161, 4th IEEE International Conference on Automatic Face and Gesture Recognition, Grenoble, France.|
|20||Isard et al., "Condensation-Conditional Density Propagation for Visual Tracking", 1998, pp. 5-28, International Journal of Computer Vision 29(1), Netherlands.|
|22||Jungling, et al., "Feature Based Person Detection Beyond the Visible Spectrum", IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, Jun. 20-25, 2009, pp. 30-37.|
|23||Kanade et al., "A Stereo Machine for Video-rate Dense Depth Mapping and Its New Applications", IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1996, pp. 196-202, The Robotics Institute, Carnegie Mellon University, Pittsburgh, PA.|
|24||Khan, et al., "Real-time Human Motion Detection and Classification", IEEE Proceedings Students Conference, Aug. 16-17, 2002, pp. 135-139.|
|25||Kohler, "Special Topics of Gesture Recognition Applied in Intelligent Home Environments", In Proceedings of the Gesture Workshop, 1998, pp. 285-296, Germany.|
|26||Kohler, "Technical Details and Ergonomical Aspects of Gesture Recognition applied in Intelligent Home Environments", 1997, Germany.|
|27||Kohler, "Vision Based Remote Control in Intelligent Home Environments", University of Erlangen-Nuremberg/ Germany, 1996, pp. 147-154, Germany.|
|28||Li, et al., "Real time Hand Gesture Recognition using a Range Camera", Retrieved at << http://www.araa.asn.au/acra/acra2009/papers/pap128s1.pdf >>, Australasian Conference on Robotics and Automation (ACRA), Dec. 2-4, 2009, pp. 7.|
|30||Livingston, "Vision-based Tracking with Dynamic Structured Light for Video See-through Augmented Reality", 1998, University of North Carolina at Chapel Hill, North Carolina, USA.|
|31||Miyagawa et al., "CCD-Based Range Finding Sensor", Oct. 1997, pp. 1648-1652, vol. 44 No. 10, IEEE Transactions on Electron Devices.|
|32||Pavlovic et al., "Visual Interpretation of Hand Gestures for Human-Computer Interaction: A Review", Jul. 1997, pp. 677-695, vol. 19, No. 7, IEEE Transactions on Pattern Analysis and Machine Intelligence.|
|33||Plagemann, et al., "Real-time Identification and Localization of Body Parts from Depth Images", Retrieved at <<http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=arnumber=5509559 >>, IEEE International Conference on Robotics and Automation (ICRA), May 3-7, 2010, 6 Pages.|
|35||Qian et al., "A Gesture-Driven Multimodal Interactive Dance System", Jun. 2004, pp. 1579-1582, IEEE International Conference on Multimedia and Expo (ICME), Taipei, Taiwan.|
|36||Rosenhahn et al., "Automatic Human Model Generation", 2005, pp. 41-48, University of Auckland (CITR), New Zealand.|
|37||Shao et al., "An Open System Architecture for a Multimedia and Multimodal User Interface", Aug. 24, 1998, Japanese Society for Rehabilitation of Persons with Disabilities (JSRPD), Japan.|
|38||Sheridan et al., "Virtual Reality Check", Technology Review, Oct. 1993, pp. 22-28, vol. 96, No. 7.|
|39||Stevens, "Flights into Virtual Reality Treating Real World Disorders", The Washington Post, Mar. 27, 1995, Science Psychology, 2 pages.|
|40||Wren et al., "Pfinder: Real-Time Tracking of the Human Body", MIT Media Laboratory Perceptual Computing Section Technical Report No. 353, Jul. 1997, vol. 19, No. 7, pp. 780-785, IEEE Transactions on Pattern Analysis and Machine Intelligence, Cambridge, MA.|
|41||Zhao, "Dressed Human Modeling, Detection, and Parts Localization", 2001, The Robotics Institute, Carnegie Mellon University, Pittsburgh, PA.|
|Citing Patent||Filing date||Publication date||Applicant||Title|
|US9011293 *||26 Jan 2012||21 Apr 2015||Flow-Motion Research And Development Ltd.||Method and system for monitoring and feed-backing on execution of physical exercise routines|
|US9350951 *||20 Nov 2012||24 May 2016||Scott Dallas Rowe||Method for interactive training and analysis|
|US9436871 *||30 Dec 2013||6 Sep 2016||Shenyang Neusoft Medical Systems Co., Ltd||Posture detection method and system|
|US9588582||20 Aug 2014||7 Mar 2017||Medibotics Llc||Motion recognition clothing (TM) with two different sets of tubes spanning a body joint|
|US9628843 *||21 Nov 2011||18 Apr 2017||Microsoft Technology Licensing, Llc||Methods for controlling electronic devices using gestures|
|US9773155||14 Oct 2014||26 Sep 2017||Microsoft Technology Licensing, Llc||Depth from time of flight camera|
|US20120190505 *||26 Jan 2012||26 Jul 2012||Flow-Motion Research And Development Ltd||Method and system for monitoring and feed-backing on execution of physical exercise routines|
|US20150092998 *||30 Dec 2013||2 Apr 2015||Shenyang Neusoft Medical Systems Co., Ltd.||Posture detection method and system|
|US20160256740 *||19 May 2016||8 Sep 2016||Scott Dallas Rowe||Method for interactive training and analysis|
|US20160267801 *||24 Oct 2013||15 Sep 2016||Huawei Device Co., Ltd.||Image display method and apparatus|
|U.S. Classification||382/224, 382/203, 382/100, 348/77|
|International Classification||G06K9/46, G06K9/00, H04N9/47|
|Cooperative Classification||G06K9/00375, G06F3/017, G06K9/00342, G06K9/00369, G06F3/011|
|17 Feb 2011||AS||Assignment|
Owner name: MICROSOFT CORPORATION, WASHINGTON
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BALAN, ALEXANDRU;SIDDIQUI, MATHEEN;GEISS, RYAN M.;AND OTHERS;SIGNING DATES FROM 20110207 TO 20110214;REEL/FRAME:025820/0938
|9 Dec 2014||AS||Assignment|
Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MICROSOFT CORPORATION;REEL/FRAME:034544/0001
Effective date: 20141014
|5 Jan 2017||FPAY||Fee payment|
Year of fee payment: 4