
Publication number: US 8588517 B2
Publication type: Grant
Application number: US 13/742,094
Publication date: 19 Nov 2013
Filing date: 15 Jan 2013
Priority date: 18 Dec 2009
Fee status: Paid
Also published as: US 8374423, US 20110150271, US 20120177254, US 20130129155
Inventors: Johnny Lee, Tommer Leyvand, Craig Peeper
Original Assignee: Microsoft Corporation
Motion detection using depth images
US 8588517 B2
Abstract
A sensor system creates a sequence of depth images that are used to detect and track motion of objects within range of the sensor system. A reference image is created and updated based on a moving average (or other function) of a set of depth images. A new depth image is compared to the reference image to create a motion image, which is an image file (or other data structure) with data representing motion. The new depth image is also used to update the reference image. The data in the motion image is grouped and associated with one or more objects being tracked. The tracking of the objects is updated by the grouped data in the motion image. The new positions of the objects are used to update an application.
Claims(20)
What is claimed is:
1. A method for using depth images to sense motion, comprising:
creating a reference image;
receiving a new depth image;
creating a motion image based on the new depth image and the reference image, wherein the motion image represents identified forward motion;
identifying one or more objects in the motion image; and
updating the reference image based on the new depth image.
2. The method of claim 1, wherein:
at least one of the one or more objects comprises a body part; and
the identifying one or more objects in the motion image comprises identifying one or more body parts in the motion image.
3. The method of claim 2, further comprising:
accessing structure data that includes structural information that enables the one or more body parts to be recognized;
wherein the identifying one or more body parts in the motion image is performed based on the structure data; and
wherein the structural information includes a skeletal model of a human.
4. The method of claim 2, further comprising:
tracking motion of at least one of the one or more body parts in the motion image;
wherein the identifying one or more body parts in the motion image includes grouping pixels of the motion image to form one or more groups of pixels, and associating at least one of the one or more groups of pixels with at least one body part for which motion is being tracked.
5. The method of claim 1, further comprising:
accessing a gestures library that includes a plurality of gesture filters, wherein each of the gesture filters comprises information concerning a gesture that can be performed; and
comparing the motion image to the gesture filters to thereby identify when one or more of the gestures are performed.
6. The method of claim 5, further comprising:
controlling an application based on the one or more gestures that are identified as being performed.
7. The method of claim 1, further comprising:
accessing a gestures library; and
identifying when one or more gestures are performed based on the motion image and the gestures library.
8. The method of claim 1, wherein:
the reference image comprises a moving average of a plurality of previous depth images and is in a same format as the previous depth images.
9. The method of claim 1, wherein:
the new depth image includes a two dimensional arrangement of pixels, where the pixels represent depth of the one or more objects.
10. The method of claim 1, wherein:
the reference image is created based on a mathematical function of multiple previous depth images of a first object moving in a scene;
the received new depth image also portrays the first object moving in the scene; and
the identifying one or more objects in the motion image includes identifying the first object in the scene.
11. An apparatus that uses depth images to sense motion, comprising:
a communication interface that receives depth images;
one or more storage devices that store depth images;
a display interface; and
one or more processors in communication with the one or more storage devices and the display interface, wherein the one or more processors
create a reference image that includes foreground data and background data based on a mathematical function of multiple previous depth images of an object moving in a scene;
access a new depth image of the object in the scene received from the communication interface;
create a motion image based on the new depth image and the reference image;
identify the object in the motion image; and
update the reference image based on the new depth image and the mathematical function.
12. The apparatus of claim 11, wherein:
the one or more processors use position information for the identified object to update an application running on the apparatus and provide signals on the display interface that indicate an update to the application.
13. The apparatus of claim 11, wherein:
each reference image and depth image include a same number of pixels; and
the mathematical function is used to operate on each pixel of the reference image.
14. The apparatus of claim 11, wherein:
the mathematical function, which is used to create and update the reference image, calculates a moving average of a most recent N previous depth images of the object moving in the scene.
15. The apparatus of claim 11, wherein:
the new depth image and the reference image are two dimensional arrangements of pixels of a common captured scene where the pixels represent depth of the object.
16. One or more processor readable storage devices having instructions encoded thereon which when executed cause one or more processors to perform a method for using depth images to sense motion, the method comprising:
creating a reference image including a number of pixels;
receiving a new depth image including a same number of pixels as in the reference image;
creating a motion image based on the new depth image and the reference image, wherein the motion image includes a same number of pixels and is in a same format as the reference image and the new depth image;
identifying one or more objects in the motion image; and
updating the reference image based on the new depth image.
17. The one or more processor readable storage devices of claim 16, wherein:
the new depth image and the reference image are two dimensional arrangements of pixels of a common captured scene where the pixels represent depth of the one or more objects.
18. The one or more processor readable storage devices of claim 16, wherein:
the creating the motion image includes subtracting the new depth image from the reference image to create a set of difference data, identifying difference data greater than a threshold as motion data that includes one of either forward motion data and backward motion data, and discarding the backward motion data.
19. The one or more processor readable storage devices of claim 16, wherein:
the identifying one or more objects in the motion image includes grouping pixels of the motion image to form one or more groups of pixels, associating each of the one or more groups of pixels with one or more objects identified in object history data, and updating the object history data.
20. The one or more processor readable storage devices of claim 19, wherein:
the identifying one or more objects in the motion image includes identifying a center of each of the one or more groups of pixels; and
the associating each of the one or more groups of pixels with one or more objects identified in object history data includes associating the center of each of the one or more groups of pixels with an object having a closest proximity in a most recent motion image.
Description
CLAIM OF PRIORITY

This application is a continuation application of U.S. application Ser. No. 13/410,546, “MOTION DETECTION USING DEPTH IMAGES,” filed on Mar. 2, 2012, which is a continuation application of U.S. application Ser. No. 12/641,788, “MOTION DETECTION USING DEPTH IMAGES,” filed on Dec. 18, 2009, both of which are incorporated herein by reference in their entirety.

BACKGROUND

Many computing applications such as computer games, multimedia applications, or the like use controls to allow users to manipulate game characters or other aspects of an application. Typically such controls are input using, for example, controllers, remotes, keyboards, mice, or the like. Unfortunately, such controls can be difficult to learn, thus creating a barrier between a user and such games and applications. Furthermore, such controls may be different than actual game actions or other application actions for which the controls are used. For example, a game control that causes a game character to swing a baseball bat may not correspond to an actual motion of swinging the baseball bat.

SUMMARY

Disclosed herein are systems and methods for tracking motion of a user or other objects in a scene using depth images. The tracked motion is then used to update an application. Therefore, a user can manipulate game characters or other aspects of the application by using movement of the user's body and/or objects around the user, rather than (or in addition to) using controllers, remotes, keyboards, mice, or the like.

A sensor system creates a sequence of depth images that are used to detect and track motion of objects within range of the sensor system. A reference image is created and updated based on a moving average (or other function) of a set of depth images. A new depth image is compared to the reference image to create a motion image, which is an image file (or other data structure) with data representing motion. The new depth image is also used to update the reference image. The data in the motion image is grouped and associated with one or more objects being tracked. The tracking of the objects is updated by the grouped data in the motion image. The new positions of the objects are used to update an application. For example, a video game system will update the position of images displayed in the video based on the new positions of the objects. In one implementation, avatars can be moved based on movement of the user in front of a camera.
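
For illustration only, the following sketch condenses this per-frame flow into a few lines of Python. It is not the patented implementation, and the array format, threshold value, and averaging weight are assumptions made for the example.

import numpy as np

def process_frame(depth, reference, t=120, threshold=20):
    """Sketch of one frame of the pipeline: compare a new depth image to a
    running-average reference image, keep forward motion, then fold the new
    frame into the reference. Constants and units are illustrative only."""
    # A positive difference means the surface moved toward the sensor.
    difference = reference.astype(np.float64) - depth
    motion = (difference > threshold).astype(np.uint8)  # 1 = forward motion

    # Moving-average style update of the reference (see Equation (1) below).
    reference = (t * reference.astype(np.float64) + depth) / (t + 1)
    return motion, reference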

One embodiment includes creating a reference image that includes foreground data and background data based on multiple previous depth images, receiving a new depth image, creating a motion image based on the new depth image and the reference image, identifying one or more objects in the motion image, using position information for the identified one or more objects to update an application, and updating the reference image based on the new depth image.

One embodiment includes a communication interface that receives depth images, one or more storage devices that store depth images, a display interface, and one or more processors in communication with the one or more storage devices and the display interface. The one or more processors access a new depth image received from the communication interface and identify motion based on comparing the new depth image to a reference image stored in the one or more storage devices. The one or more processors create a motion image representing identified motion. The one or more processors group pixels of the motion image and associate one or more groups of pixels with one or more objects identified in object history data stored in the one or more storage devices. The one or more processors use position information for the identified one or more objects to update an application running on the apparatus and provide signals on the display interface that indicate the update to the application.

One embodiment includes receiving a new depth image, identifying motion based on comparing the new depth image to a reference image, creating a motion image representing identified forward motion and discarding identified backward motion when creating the motion image, identifying one or more objects in the motion image, and reporting the identified one or more objects in the motion image.

This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any or all disadvantages noted in any part of this disclosure.

BRIEF DESCRIPTION OF THE DRAWINGS

FIGS. 1A and 1B illustrate an example embodiment of a tracking system with a user playing a game.

FIG. 2 illustrates an example embodiment of a capture device that may be used as part of the tracking system.

FIG. 3 illustrates an example embodiment of a computing system that may be used to track motion and update an application based on the tracked motion.

FIG. 4 illustrates another example embodiment of a computing system that may be used to track motion and update an application based on the tracked motion.

FIG. 5 is an example depth image.

FIG. 6 depicts the data in a depth image.

FIG. 7 is a flow chart describing one embodiment of a process for capturing a sequence of depth images.

FIG. 8 is a flow chart describing one embodiment of a process for operating a computing system to track motion and update an application based on that tracked motion.

FIG. 9 is a flow chart describing one embodiment of a process for creating a motion image based on a depth image and a reference image.

FIG. 10 is an example of a motion image.

FIG. 11 is a flow chart describing one embodiment of a process for grouping pixels of a motion image.

FIG. 12 is a flow chart describing one embodiment of a process for associating groups of pixels in a motion image with objects being tracked.

FIG. 13 is a flow chart describing another embodiment of a process for associating groups of pixels in a motion image with objects being tracked.

DETAILED DESCRIPTION

Depth images are captured by a sensor and used by a computing system to track motions of a user and/or other objects. The tracked motion is then used to update an application. Therefore, a user can manipulate game characters or other aspects of the application by using movement of the user's body and/or objects around the user, rather than (or in addition to) using controllers, remotes, keyboards, mice, or the like. For example, a video game system will update the position of images displayed in the video based on the new positions of the objects or update an avatar based on motion of the user.

FIGS. 1A and 1B illustrate an example embodiment of a tracking system 10 with a user 18 playing a boxing video game. In an example embodiment, the tracking system 10 may be used to recognize, analyze, and/or track a human target such as the user 18 or other objects within range of tracking system 10.

As shown in FIG. 1A, tracking system 10 may include a computing system 12. The computing system 12 may be a computer, a gaming system or console, or the like. According to an example embodiment, the computing system 12 may include hardware components and/or software components such that computing system 12 may be used to execute applications such as gaming applications, non-gaming applications, or the like. In one embodiment, computing system 12 may include a processor such as a standardized processor, a specialized processor, a microprocessor, or the like that may execute instructions stored on a processor readable storage device for performing the processes described herein.

As shown in FIG. 1A, tracking system 10 may further include a capture device 20. The capture device 20 may be, for example, a camera that may be used to visually monitor one or more users, such as the user 18, such that gestures and/or movements performed by the one or more users may be captured, analyzed, and tracked to perform one or more controls or actions within the application and/or animate an avatar or on-screen character, as will be described in more detail below.

According to one embodiment, the tracking system 10 may be connected to an audiovisual device 16 such as a television, a monitor, a high-definition television (HDTV), or the like that may provide game or application visuals and/or audio to a user such as the user 18. For example, the computing system 12 may include a video adapter such as a graphics card and/or an audio adapter such as a sound card that may provide audiovisual signals associated with the game application, non-game application, or the like. The audiovisual device 16 may receive the audiovisual signals from the computing system 12 and may then output the game or application visuals and/or audio associated with the audiovisual signals to the user 18. According to one embodiment, the audiovisual device 16 may be connected to the computing system 12 via, for example, an S-Video cable, a coaxial cable, an HDMI cable, a DVI cable, a VGA cable, component video cable, or the like.

As shown in FIGS. 1A and 1B, the tracking system 10 may be used to recognize, analyze, and/or track a human target such as the user 18. For example, the user 18 may be tracked using the capture device 20 such that the gestures and/or movements of user 18 may be captured to animate an avatar or on-screen character and/or may be interpreted as controls that may be used to affect the application being executed by computer environment 12. Thus, according to one embodiment, the user 18 may move his or her body to control the application and/or animate the avatar or on-screen character.

In the example depicted in FIGS. 1A and 1B, the application executing on the computing system 12 may be a boxing game that the user 18 is playing. For example, the computing system 12 may use the audiovisual device 16 to provide a visual representation of a boxing opponent 38 to the user 18. The computing system 12 may also use the audiovisual device 16 to provide a visual representation of a player avatar 40 that the user 18 may control with his or her movements. For example, as shown in FIG. 1B, the user 18 may throw a punch in physical space to cause the player avatar 40 to throw a punch in game space. Thus, according to an example embodiment, the computer system 12 and the capture device 20 recognize and analyze the punch of the user 18 in physical space such that the punch may be interpreted as a game control of the player avatar 40 in game space and/or the motion of the punch may be used to animate the player avatar 40 in game space.

Other movements by the user 18 may also be interpreted as other controls or actions and/or used to animate the player avatar, such as controls to bob, weave, shuffle, block, jab, or throw a variety of different power punches. Furthermore, some movements may be interpreted as controls that may correspond to actions other than controlling the player avatar 40. For example, in one embodiment, the player may use movements to end, pause, or save a game, select a level, view high scores, communicate with a friend, etc. According to another embodiment, the player may use movements to select the game or other application from a main user interface. Thus, in example embodiments, a full range of motion of the user 18 may be available, used, and analyzed in any suitable manner to interact with an application.

In example embodiments, the human target such as the user 18 may have an object. In such embodiments, the user of an electronic game may be holding the object such that the motions of the player and the object may be used to adjust and/or control parameters of the game. For example, the motion of a player holding a racket may be tracked and utilized for controlling an on-screen racket in an electronic sports game. In another example embodiment, the motion of a player holding an object may be tracked and utilized for controlling an on-screen weapon in an electronic combat game. Objects not held by the user can also be tracked, such as objects thrown, pushed or rolled by the user (or a different user) as well as self propelled objects. In addition to boxing, other games can also be implemented.

According to other example embodiments, the tracking system 10 may further be used to interpret target movements as operating system and/or application controls that are outside the realm of games. For example, virtually any controllable aspect of an operating system and/or application may be controlled by movements of the target such as the user 18.

FIG. 2 illustrates an example embodiment of the capture device 20 that may be used in the tracking system 10. According to an example embodiment, the capture device 20 may be configured to capture video with depth information including a depth image that may include depth values via any suitable technique including, for example, time-of-flight, structured light, stereo image, or the like. According to one embodiment, the capture device 20 may organize the depth information into “Z layers,” or layers that may be perpendicular to a Z axis extending from the depth camera along its line of sight.

As shown in FIG. 2, the capture device 20 may include an image camera component 22. According to an example embodiment, the image camera component 22 may be a depth camera that may capture a depth image of a scene. The depth image may include a two-dimensional (2-D) pixel area of the captured scene where each pixel in the 2-D pixel area may represent a depth value such as a distance in, for example, centimeters, millimeters, or the like of an object in the captured scene from the camera.

As shown in FIG. 2, according to an example embodiment, the image camera component 22 may include an infra-red (IR) light component 24, a three-dimensional (3-D) camera 26, and an RGB camera 28 that may be used to capture the depth image of a scene. For example, in time-of-flight analysis, the IR light component 24 of the capture device 20 may emit an infrared light onto the scene and may then use sensors (not shown) to detect the backscattered light from the surface of one or more targets and objects in the scene using, for example, the 3-D camera 26 and/or the RGB camera 28. In some embodiments, pulsed infrared light may be used such that the time between an outgoing light pulse and a corresponding incoming light pulse may be measured and used to determine a physical distance from the capture device 20 to a particular location on the targets or objects in the scene. Additionally, in other example embodiments, the phase of the outgoing light wave may be compared to the phase of the incoming light wave to determine a phase shift. The phase shift may then be used to determine a physical distance from the capture device to a particular location on the targets or objects.

According to another example embodiment, time-of-flight analysis may be used to indirectly determine a physical distance from the capture device 20 to a particular location on the targets or objects by analyzing the intensity of the reflected beam of light over time via various techniques including, for example, shuttered light pulse imaging.

In another example embodiment, the capture device 20 may use a structured light to capture depth information. In such an analysis, patterned light (i.e., light displayed as a known pattern such as a grid pattern, a stripe pattern, or a different pattern) may be projected onto the scene via, for example, the IR light component 24. Upon striking the surface of one or more targets or objects in the scene, the pattern may become deformed in response. Such a deformation of the pattern may be captured by, for example, the 3-D camera 26 and/or the RGB camera 28 and may then be analyzed to determine a physical distance from the capture device to a particular location on the targets or objects. In some implementations, the IR light component 24 is displaced from the cameras 26 and 28 so that triangulation can be used to determine the distance from cameras 26 and 28. In some implementations, the capture device 20 will include a dedicated IR sensor to sense the IR light.

According to another embodiment, the capture device 20 may include two or more physically separated cameras that may view a scene from different angles to obtain visual stereo data that may be resolved to generate depth information. Other types of depth image sensors can also be used to create a depth image.

The capture device 20 may further include a microphone 30. The microphone 30 may include a transducer or sensor that may receive and convert sound into an electrical signal. According to one embodiment, the microphone 30 may be used to reduce feedback between the capture device 20 and the computing system 12 in the target recognition, analysis, and tracking system 10. Additionally, the microphone 30 may be used to receive audio signals that may also be provided by the user to control applications such as game applications, non-game applications, or the like that may be executed by the computing system 12.

In an example embodiment, the capture device 20 may further include a processor 32 that may be in operative communication with the image camera component 22. The processor 32 may include a standardized processor, a specialized processor, a microprocessor, or the like that may execute instructions including, for example, instructions for receiving a depth image, generating the appropriate data format (e.g., frame) and transmitting the data to computing system 12.

The capture device 20 may further include a memory component 34 that may store the instructions that may be executed by the processor 32, images or frames of images captured by the 3-D camera and/or RGB camera, or any other suitable information, images, or the like. According to an example embodiment, the memory component 34 may include random access memory (RAM), read only memory (ROM), cache, Flash memory, a hard disk, or any other suitable storage component. As shown in FIG. 2, in one embodiment, the memory component 34 may be a separate component in communication with the image capture component 22 and the processor 32. According to another embodiment, the memory component 34 may be integrated into the processor 32 and/or the image capture component 22.

As shown in FIG. 2, the capture device 20 may be in communication with the computing system 12 via a communication link 36. The communication link 36 may be a wired connection including, for example, a USB connection, a Firewire connection, an Ethernet cable connection, or the like and/or a wireless connection such as a wireless 802.11b, g, a, or n connection. According to one embodiment, the computing system 12 may provide a clock to the capture device 20 that may be used to determine when to capture, for example, a scene via the communication link 36. Additionally, the capture device 20 provides the depth information and visual (e.g., RGB) images captured by, for example, the 3-D camera 26 and/or the RGB camera 28 to the computing system 12 via the communication link 36. In one embodiment, the depth images and visual images are transmitted at 30 frames per second. The computing system 12 may then use the model, depth information, and captured images to, for example, control an application such as a game or word processor and/or animate an avatar or on-screen character.

Computing system 12 includes gestures library 190, structure data 192, depth image processing and object reporting module 194 and application 196. Depth image processing and object reporting module 194 uses the depth images to track motion of objects, such as the user and other objects. To assist in the tracking of the objects, depth image processing and object reporting module 194 uses gestures library 190 and structure data 192.

Structure data 192 includes structural information about objects that may be tracked. For example, a skeletal model of a human may be stored to help understand movements of the user and recognize body parts. Structural information about inanimate objects may also be stored to help recognize those objects and help understand movement.

Gestures library 190 may include a collection of gesture filters, each comprising information concerning a gesture that may be performed by the skeletal model (as the user moves). The data captured by the cameras 26, 28 and the capture device 20 in the form of the skeletal model and movements associated with it may be compared to the gesture filters in the gestures library 190 to identify when a user (as represented by the skeletal model) has performed one or more gestures. Those gestures may be associated with various controls of an application. Thus, the computing system 12 may use the gestures library 190 to interpret movements of the skeletal model and to control application 196 based on the movements. As such, the gestures library 190 may be used by depth image processing and object reporting module 194 and application 196.
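
As a rough illustration of the gesture filter idea, the sketch below defines a hypothetical filter structure and a toy "punch" predicate over a short history of tracked positions. The names, data layout, and 0.3 meter threshold are invented for this example and are not taken from the patent.

from dataclasses import dataclass
from typing import Callable, List, Sequence, Tuple

Position = Tuple[float, float, float]  # (x, y, z) of a tracked body part

@dataclass
class GestureFilter:
    """Hypothetical gesture filter: a name plus a predicate over a short
    history of tracked positions for one body part."""
    name: str
    matches: Callable[[Sequence[Position]], bool]

def punch(history: Sequence[Position]) -> bool:
    # Forward motion toward the sensor: z decreases sharply over the window.
    return len(history) >= 2 and history[0][2] - history[-1][2] > 0.3  # meters

GESTURES: List[GestureFilter] = [GestureFilter("punch", punch)]

def identify_gestures(history: Sequence[Position]) -> List[str]:
    """Return the names of all gesture filters that match the tracked motion."""
    return [g.name for g in GESTURES if g.matches(history)]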

Application 196 can be a video game, productivity application, etc. In one embodiment, depth image processing and object reporting module 194 will report to application 196 an identification of each object detected and the location of the object for each frame. Application 196 will use that information to update the position or movement of an avatar or other images in the display.

FIG. 3 illustrates an example embodiment of a computing system that may be the computing system 12 shown in FIGS. 1A-2 used to track motion and/or animate (or otherwise update) an avatar or other on-screen object displayed by an application. The computing system such as the computing system 12 described above with respect to FIGS. 1A-2 may be a multimedia console 100, such as a gaming console. As shown in FIG. 3, the multimedia console 100 has a central processing unit (CPU) 101 having a level 1 cache 102, a level 2 cache 104, and a flash ROM (Read Only Memory) 106. The level 1 cache 102 and a level 2 cache 104 temporarily store data and hence reduce the number of memory access cycles, thereby improving processing speed and throughput. The CPU 101 may be provided having more than one core, and thus, additional level 1 and level 2 caches 102 and 104. The flash ROM 106 may store executable code that is loaded during an initial phase of a boot process when the multimedia console 100 is powered ON.

A graphics processing unit (GPU) 108 and a video encoder/video codec (coder/decoder) 114 form a video processing pipeline for high speed and high resolution graphics processing. Data is carried from the graphics processing unit 108 to the video encoder/video codec 114 via a bus. The video processing pipeline outputs data to an A/V (audio/video) port 140 for transmission to a television or other display. A memory controller 110 is connected to the GPU 108 to facilitate processor access to various types of memory 112, such as, but not limited to, a RAM (Random Access Memory).

The multimedia console 100 includes an I/O controller 120, a system management controller 122, an audio processing unit 123, a network interface controller 124, a first USB host controller 126, a second USB controller 128 and a front panel I/O subassembly 130 that are preferably implemented on a module 118. The USB controllers 126 and 128 serve as hosts for peripheral controllers 142(1)-142(2), a wireless adapter 148, and an external memory device 146 (e.g., flash memory, external CD/DVD ROM drive, removable media, etc.). The network interface 124 and/or wireless adapter 148 provide access to a network (e.g., the Internet, home network, etc.) and may be any of a wide variety of various wired or wireless adapter components including an Ethernet card, a modem, a Bluetooth module, a cable modem, and the like.

System memory 143 is provided to store application data that is loaded during the boot process. A media drive 144 is provided and may comprise a DVD/CD drive, Blu-Ray drive, hard disk drive, or other removable media drive, etc. The media drive 144 may be internal or external to the multimedia console 100. Application data may be accessed via the media drive 144 for execution, playback, etc. by the multimedia console 100. The media drive 144 is connected to the I/O controller 120 via a bus, such as a Serial ATA bus or other high speed connection (e.g., IEEE 1394).

The system management controller 122 provides a variety of service functions related to assuring availability of the multimedia console 100. The audio processing unit 123 and an audio codec 132 form a corresponding audio processing pipeline with high fidelity and stereo processing. Audio data is carried between the audio processing unit 123 and the audio codec 132 via a communication link. The audio processing pipeline outputs data to the A/V port 140 for reproduction by an external audio player or device having audio capabilities.

The front panel I/O subassembly 130 supports the functionality of the power button 150 and the eject button 152, as well as any LEDs (light emitting diodes) or other indicators exposed on the outer surface of the multimedia console 100. A system power supply module 136 provides power to the components of the multimedia console 100. A fan 138 cools the circuitry within the multimedia console 100.

The CPU 101, GPU 108, memory controller 110, and various other components within the multimedia console 100 are interconnected via one or more buses, including serial and parallel buses, a memory bus, a peripheral bus, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures can include a Peripheral Component Interconnects (PCI) bus, PCI-Express bus, etc.

When the multimedia console 100 is powered ON, application data may be loaded from the system memory 143 into memory 112 and/or caches 102, 104 and executed on the CPU 101. The application may present a graphical user interface that provides a consistent user experience when navigating to different media types available on the multimedia console 100. In operation, applications and/or other media contained within the media drive 144 may be launched or played from the media drive 144 to provide additional functionalities to the multimedia console 100.

The multimedia console 100 may be operated as a standalone system by simply connecting the system to a television or other display. In this standalone mode, the multimedia console 100 allows one or more users to interact with the system, watch movies, or listen to music. However, with the integration of broadband connectivity made available through the network interface 124 or the wireless adapter 148, the multimedia console 100 may further be operated as a participant in a larger network community.

When the multimedia console 100 is powered ON, a set amount of hardware resources is reserved for system use by the multimedia console operating system. These resources may include a reservation of memory (e.g., 16 MB), CPU and GPU cycles (e.g., 5%), networking bandwidth (e.g., 8 kbps), etc. Because these resources are reserved at system boot time, the reserved resources do not exist from the application's view.

In particular, the memory reservation preferably is large enough to contain the launch kernel, concurrent system applications and drivers. The CPU reservation is preferably constant such that if the reserved CPU usage is not used by the system applications, an idle thread will consume any unused cycles.

With regard to the GPU reservation, lightweight messages generated by the system applications (e.g., popups) are displayed by using a GPU interrupt to schedule code to render a popup into an overlay. The amount of memory required for an overlay depends on the overlay area size, and the overlay preferably scales with screen resolution. Where a full user interface is used by the concurrent system application, it is preferable to use a resolution independent of the application resolution. A scaler may be used to set this resolution such that the need to change frequency and cause a TV resynch is eliminated.

After the multimedia console 100 boots and system resources are reserved, concurrent system applications execute to provide system functionalities. The system functionalities are encapsulated in a set of system applications that execute within the reserved system resources described above. The operating system kernel identifies threads that are system application threads versus gaming application threads. The system applications are preferably scheduled to run on the CPU 101 at predetermined times and intervals in order to provide a consistent system resource view to the application. The scheduling is to minimize cache disruption for the gaming application running on the console.

When a concurrent system application requires audio, audio processing is scheduled asynchronously to the gaming application due to time sensitivity. A multimedia console application manager (described below) controls the gaming application audio level (e.g., mute, attenuate) when system applications are active.

Input devices (e.g., controllers 142(1) and 142(2)) are shared by gaming applications and system applications. The input devices are not reserved resources, but are to be switched between system applications and the gaming application such that each will have a focus of the device. The application manager preferably controls the switching of the input stream without the gaming application's knowledge, and a driver maintains state information regarding focus switches. The cameras 26, 28 and capture device 20 may define additional input devices for the console 100 via USB controller 126 or other interface.

FIG. 4 illustrates another example embodiment of a computing system 220 that may be the computing system 12 shown in FIGS. 1A-2 used to track motion and/or animate (or otherwise update) an avatar or other on-screen object displayed by an application. The computing system environment 220 is only one example of a suitable computing system and is not intended to suggest any limitation as to the scope of use or functionality of the presently disclosed subject matter. Neither should the computing system 220 be interpreted as having any dependency or requirement relating to any one or combination of components illustrated in the exemplary computing system 220. In some embodiments the various depicted computing elements may include circuitry configured to instantiate specific aspects of the present disclosure. For example, the term circuitry used in the disclosure can include specialized hardware components configured to perform function(s) by firmware or switches. In other example embodiments the term circuitry can include a general purpose processing unit, memory, etc., configured by software instructions that embody logic operable to perform function(s). In example embodiments where circuitry includes a combination of hardware and software, an implementer may write source code embodying logic and the source code can be compiled into machine readable code that can be processed by the general purpose processing unit. Since one skilled in the art can appreciate that the state of the art has evolved to a point where there is little difference between hardware, software, or a combination of hardware/software, the selection of hardware versus software to effectuate specific functions is a design choice left to an implementer. More specifically, one of skill in the art can appreciate that a software process can be transformed into an equivalent hardware structure, and a hardware structure can itself be transformed into an equivalent software process. Thus, the selection of a hardware implementation versus a software implementation is one of design choice and left to the implementer.

Computing system 220 comprises a computer 241, which typically includes a variety of computer readable media. Computer readable media can be any available media that can be accessed by computer 241 and includes both volatile and nonvolatile media, removable and non-removable media. The system memory 222 includes computer storage media in the form of volatile and/or nonvolatile memory such as read only memory (ROM) 223 and random access memory (RAM) 260. A basic input/output system 224 (BIOS), containing the basic routines that help to transfer information between elements within computer 241, such as during start-up, is typically stored in ROM 223. RAM 260 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit 259. By way of example, and not limitation, FIG. 4 illustrates operating system 225, application programs 226, other program modules 227, and program data 228.

The computer 241 may also include other removable/non-removable, volatile/nonvolatile computer storage media. By way of example only, FIG. 4 illustrates a hard disk drive 238 that reads from or writes to non-removable, nonvolatile magnetic media, a magnetic disk drive 239 that reads from or writes to a removable, nonvolatile magnetic disk 254, and an optical disk drive 240 that reads from or writes to a removable, nonvolatile optical disk 253 such as a CD ROM or other optical media. Other removable/non-removable, volatile/nonvolatile computer storage media that can be used in the exemplary operating environment include, but are not limited to, magnetic tape cassettes, flash memory cards, digital versatile disks, digital video tape, solid state RAM, solid state ROM, and the like. The hard disk drive 238 is typically connected to the system bus 221 through a non-removable memory interface such as interface 234, and magnetic disk drive 239 and optical disk drive 240 are typically connected to the system bus 221 by a removable memory interface, such as interface 235.

The drives and their associated computer storage media discussed above and illustrated in FIG. 4 provide storage of computer readable instructions, data structures, program modules and other data for the computer 241. In FIG. 4, for example, hard disk drive 238 is illustrated as storing operating system 258, application programs 257, other program modules 256, and program data 255. Note that these components can either be the same as or different from operating system 225, application programs 226, other program modules 227, and program data 228. Operating system 258, application programs 257, other program modules 256, and program data 255 are given different numbers here to illustrate that, at a minimum, they are different copies. A user may enter commands and information into the computer 241 through input devices such as a keyboard 251 and pointing device 252, commonly referred to as a mouse, trackball or touch pad. Other input devices (not shown) may include a microphone, joystick, game pad, satellite dish, scanner, or the like. These and other input devices are often connected to the processing unit 259 through a user input interface 236 that is coupled to the system bus, but may be connected by other interface and bus structures, such as a parallel port, game port or a universal serial bus (USB). The cameras 26, 28 and capture device 20 may define additional input devices for the console 100 that connect via user input interface 236. A monitor 242 or other type of display device is also connected to the system bus 221 via an interface, such as a video interface 232. In addition to the monitor, computers may also include other peripheral output devices such as speakers 244 and printer 243, which may be connected through an output peripheral interface 233. Capture Device 20 may connect to computing system 220 via output peripheral interface 233, network interface 237, or other interface.

The computer 241 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 246. The remote computer 246 may be a personal computer, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computer 241, although only a memory storage device 247 has been illustrated in FIG. 4. The logical connections depicted include a local area network (LAN) 245 and a wide area network (WAN) 249, but may also include other networks. Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets and the Internet.

When used in a LAN networking environment, the computer 241 is connected to the LAN 245 through a network interface or adapter 237. When used in a WAN networking environment, the computer 241 typically includes a modem 250 or other means for establishing communications over the WAN 249, such as the Internet. The modem 250, which may be internal or external, may be connected to the system bus 221 via the user input interface 236, or other appropriate mechanism. In a networked environment, program modules depicted relative to the computer 241, or portions thereof, may be stored in the remote memory storage device. By way of example, and not limitation, FIG. 4 illustrates application programs 248 as residing on memory device 247. It will be appreciated that the network connections shown are exemplary and other means of establishing a communications link between the computers may be used.

As explained above, capture device 20 provides RGB images and depth images to computing system 12. The depth image may be a plurality of observed pixels where each observed pixel has an observed depth value. For example, the depth image may include a two-dimensional (2-D) pixel area of the captured scene where each pixel in the 2-D pixel area may have a depth value such as a length or distance in, for example, centimeters, millimeters, or the like of an object in the captured scene from the capture device.

FIG. 5 illustrates an example embodiment of a depth image that may be received at computing system 12 from capture device 20. According to an example embodiment, the depth image may be an image and/or frame of a scene captured by, for example, the 3-D camera 26 and/or the RGB camera 28 of the capture device 20 described above with respect to FIG. 2. As shown in FIG. 5, the depth image may include a human target corresponding to, for example, a user such as the user 18 described above with respect to FIGS. 1A and 1B and one or more non-human targets such as a wall, a table, a monitor, or the like in the captured scene. As described above, the depth image may include a plurality of observed pixels where each observed pixel has an observed depth value associated therewith. For example, the depth image 400 may include a two-dimensional (2-D) pixel area of the captured scene where each pixel at particular X-value and Y-value in the 2-D pixel area may have a depth value such as a length or distance in, for example, centimeters, millimeters, or the like of a target or object in the captured scene from the capture device.

In one embodiment, the depth image may be colorized or grayscale such that different colors or shades of the pixels of the depth image correspond to and/or visually depict different distances of the targets 404 from the capture device 20. Upon receiving the image, one or more high-variance and/or noisy depth values may be removed and/or smoothed from the depth image; portions of missing and/or removed depth information may be filled in and/or reconstructed; and/or any other suitable processing may be performed on the received depth image.

FIG. 6 provides another view/representation of a depth image (not corresponding to the same example as FIG. 5). The view of FIG. 6 shows the depth data for each pixel as an integer that represents the distance of the target to capture device 20 for that pixel. The example depth image of FIG. 6 shows 24×24 pixels; however, it is likely that a depth image of greater resolution would be used.
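
In code, such a depth image can be thought of as a 2-D array of integers, one distance per pixel. The following minimal sketch assumes millimeter units and made-up values; a real depth image would be far larger (for example, 320x240 pixels at 30 frames per second).

import numpy as np

# One distance value per pixel; the closer (smaller) values could correspond
# to a user's arm extended toward the capture device.
depth_image = np.array([
    [2200, 2200, 2210, 2500],
    [2195, 1800, 1805, 2500],
    [2190, 1795, 1800, 2500],
    [2200, 2200, 2205, 2500],
], dtype=np.int32)

print(depth_image.shape)  # (4, 4): X/Y position in the array, value = depth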

FIG. 7 is a flow chart describing one embodiment of a process for operating capture device 20. In step 402, a depth image and a visual image are captured by any of the sensors in capture device 20 described herein, or other suitable sensors known in the art. In one embodiment, the depth image is captured separately from the visual image. In some implementations, the depth image and visual image are captured at the same time, while in other implementations they are captured sequentially or at different times. In other embodiments, the depth image is captured with the visual image or combined with the visual image as one image file so that each pixel has an R value, a G value, a B value and a Z value (distance). In step 404, the depth image and the visual image are transmitted to computing system 12. In one embodiment, the depth image and visual image are transmitted at 30 frames per second. In some examples, the depth image is transmitted separately from the visual image. In other embodiments, the depth image and visual image can be transmitted together.

FIG. 8 is a flowchart describing one embodiment of a process for operating computing system 12 to use depth images to track and identify objects (users and other objects) in order to update an application based on the objects identified and tracked. The process of FIG. 8 is performed in response to capture device 20 transmitting a depth image and visual image to computing system 12 (step 404 of FIG. 7); therefore, the process of FIG. 8 is performed many times. In one embodiment, the process of FIG. 8 is performed 30 times a second. In other embodiments, the process of FIG. 8 can be performed more or less than 30 times per second based on or independent from the frame rate of the depth images. In step 460 of FIG. 8, computing system 12 will receive a new depth image from capture device 20 in response to step 404 of FIG. 7. This depth image is provided to the depth image processing and object reporting module 194.

In step 462 of FIG. 8, a motion image is created based on the newly received depth image and a reference image. The reference image is typically in the same format as the depth image. In one embodiment, the reference image is a single static image of a scene taken prior to the motion. In another embodiment, the reference image is a moving average of the most recent N frames of depth images, which provides an estimate of the typical depth position for every pixel in the depth image sequence. If the reference image is a moving average, then the reference image will update over time. The average can either be a uniform average across the sequence length, an exponential decay average, or other weighted average using an external weighting system. Comparing the current depth image to the moving average allows the system to adapt to gradual changes in the scene, as well as automatically adapt to a moving capture device without a reinitialization step. In one embodiment, the reference image is based on 4 seconds of depth images; therefore, if the frame rate is 30 frames per second then the reference image is based on the most recent 120 depth images.

The reference image is updated based on each new depth image that is received. In one example, it will take 120 frames before the reference image is established. In other embodiments, the reference image will be established with the first depth image and then each additional depth image will be added to the reference image until there are 120 depth images received, then the reference image will be updated by the most recent 120 images.

The Equation (1) below provides one example of a formula for creating a reference image:

new average = (t × (old average) + new data) / (t + 1)        Equation (1)

Equation (1) is used to operate on each pixel of the reference image. The variable “t” is the number of frames included in the reference image. In one example, 120 frames are included (4 seconds of video at 30 frames per second). The variable “old average” is the pixel value in the reference image for the particular pixel under consideration. The variable “new data” is the corresponding pixel value in the new depth image received. The output “new average” is the new pixel value of the updated reference image. Equation 1 is performed for every pixel of the reference image. In this manner, the reference image is re-created each time it is updated.

In one alternative, the value of t in Equation (1) can be different if the motion is backward versus forward. For a particular pixel, the value of t can be 120 if the motion is forward and the value of t can be 30 if the motion is backward.

When using Equation (1), the system does not need to keep a buffer of the previous 120 frames of depth images. Only the current new depth image needs to be stored in a buffer as well as the reference image. If the system used a straight averaging process, then a buffer would need to keep the last 120 frames of depth images.
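
A minimal sketch of this update, assuming the reference image and depth image are held as NumPy arrays of equal shape, applies Equation (1) to every pixel at once and illustrates why no 120-frame buffer is required.

import numpy as np

def update_reference(reference: np.ndarray, new_depth: np.ndarray,
                     t: int = 120) -> np.ndarray:
    """Apply Equation (1) to every pixel: only the current depth frame and
    the existing reference image are needed, not the previous 120 frames.
    t = 120 assumes 4 seconds of history at 30 frames per second; in the
    alternative described above, t could instead be chosen per pixel
    (e.g., 120 for forward motion and 30 for backward motion)."""
    return (t * reference.astype(np.float64) + new_depth) / (t + 1)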

Step 462 includes creating a motion image based on the new depth image received in step 460 and the reference image discussed above. As explained above, in one embodiment, the depth image and the reference image have the same number of pixels. The motion image created in step 462 is also a file (or other data structure) with the same number of pixels in the same format as the depth image and reference image. In one embodiment, the motion image is created by subtracting the new depth image from the reference image on a pixel by pixel basis. Thus, a corresponding pixel in the depth image is subtracted from the corresponding pixel in the reference image and the result is stored as the corresponding pixel in the motion image.

If the pixel value in the new depth image is the same as the pixel value in the reference image, then there is no motion detected for the pixel. If the difference between the reference image and the new depth image is positive, then there is motion towards capture device 20. If the difference between the reference image and the depth image is negative, then there is motion away from the capture device 20.

In one embodiment, the process for creating the motion image will compare a threshold to the difference between the reference image and new depth image, on a pixel-by-pixel basis, so that small variations will not be detected as motion. Additionally, some embodiments will discard backward motion data (away from the camera) and only report forward motion (toward the camera). In some implementations, the system will track the magnitude of the motion (e.g., the difference between the reference image pixel and depth image pixel), while in other embodiments, the system will only store a Boolean value in the motion image to indicate whether there is motion or not.

FIG. 9 is a flowchart describing one embodiment of a process for creating a motion image based on the new depth image received and the reference image that includes subtracting pixels, using a threshold and discarding backward motion. Thus, the process of FIG. 9 is one example implementation of step 462 of FIG. 8. In step 500 of FIG. 9, the system will access a pixel that has not already been operated on in the depth image. In step 502, the system will access the corresponding pixel in the reference image. In step 504, the system will subtract the pixels by subtracting the pixel in the new depth image from the pixel in the reference image. If the resulting difference is negative, then a zero is inserted into the corresponding pixel in the motion image in step 514 because the system is discarding backward motion. If the difference in pixels was not negative, then in step 508 it is determined whether the difference is greater than a threshold (e.g. 2 centimeters per meter). If the difference is not greater than the threshold, then in step 514 a zero is added to the corresponding pixel in the motion image because the difference is not great enough to be classified as forward motion. This technique helps reduce noise. If the difference is greater than the threshold, then the system will conclude that there is forward motion (e.g., toward the camera), and in step 510, the difference data is inserted into the corresponding pixel in the motion image. In another embodiment, rather than adding the difference data, a Boolean value (e.g., 1) is added to the motion image in step 510. In step 512, it is determined whether there are more pixels in the depth image that need to be operated on. If there are no more pixels in the depth image to be operated on, then the process of FIG. 9 is complete. If there are more pixels in the depth image to operate on, the process moves back to step 500 and accesses the next pixel in the depth image that has not been operated on. At the end of the process of FIG. 9, the motion image is created and includes either a 1 or 0 at each pixel. A zero means no motion (or discarded backward motion). A 1 for a pixel value indicates that there was forward motion for that pixel.
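
The sketch below mirrors the logic of FIG. 9 but operates on the whole image at once rather than pixel by pixel. The threshold value is an arbitrary placeholder, and the output uses the Boolean-style (1 or 0) variant of the motion image.

import numpy as np

def create_motion_image(reference: np.ndarray, new_depth: np.ndarray,
                        threshold: float = 20.0) -> np.ndarray:
    """Subtract the new depth image from the reference image, discard backward
    motion (negative differences), and require the remaining differences to
    exceed a noise threshold. Returns 1 for forward motion, 0 otherwise."""
    difference = reference.astype(np.float64) - new_depth
    motion = np.zeros(difference.shape, dtype=np.uint8)
    motion[difference > threshold] = 1  # forward motion only
    return motion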

FIG. 10 shows one example of a motion image created using the process of FIG. 9. As can be seen, some of the pixels have a data value of 1, indicating that those pixels represent forward motion. The pixels for which there is no motion would have a data value of zero; however, to make the drawing easier to read, the zeros have been left blank.

In the above discussion, the comparison between the newly received depth image and the reference image is a simple subtraction and thresholding of values. More sophisticated embodiments may use mean squared error, standard deviation, difference of means or other statistical measures to compare the two data sets. This comparison may be done at the image level, the pixel level or some other intermediate granularity of the image.
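As one hedged illustration of comparing at an intermediate granularity, the sketch below marks whole blocks of pixels as moving when the mean squared error between the reference and the new depth image exceeds a threshold; the block size and the threshold are illustrative assumptions, not values from the patent.

```python
import numpy as np

def block_mse_motion(reference, new_depth, block=8, mse_threshold=400.0):
    """Mark an entire block as moving when its mean squared error exceeds a threshold."""
    h, w = reference.shape
    motion = np.zeros((h, w), dtype=np.uint8)
    for y in range(0, h, block):
        for x in range(0, w, block):
            r = reference[y:y + block, x:x + block]
            d = new_depth[y:y + block, x:x + block]
            if np.mean((r - d) ** 2) > mse_threshold:
                motion[y:y + block, x:x + block] = 1
    return motion
```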

In the above discussion, there was only one reference image that was maintained and compared against the newly received depth images. In other embodiments, the system can use more than one reference image. For example, the system can create and maintain two or more reference images that average the depth data over differing, or even randomized, time constants. Comparison against multiple reference images can increase the likelihood that moving objects will be properly identified. In such an embodiment, the new depth image is compared against multiple reference images, and any motion detected from any of the comparisons is used to add a 1 to the appropriate pixel in the motion image being created. Other schemes for comparing multiple reference images to a depth image can also be used.
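A minimal sketch of this multiple-reference scheme follows: each reference image is compared against the new depth image and the resulting motion masks are combined with a logical OR. The per-reference threshold is an assumption for illustration.

```python
import numpy as np

def motion_from_multiple_references(references, new_depth, threshold=20.0):
    """Combine motion detected against any of several reference images."""
    motion = np.zeros(new_depth.shape, dtype=np.uint8)
    for reference in references:
        # any reference that detects forward motion sets the pixel to 1
        motion |= ((reference - new_depth) > threshold).astype(np.uint8)
    return motion
```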

Looking back at FIG. 8, after the motion image is created in step 462, the reference image is updated in step 464 based on the new depth image that was just received in the most recent iteration of step 460. As discussed above, in one embodiment, the reference image is a moving average of the most recent N frames of depth images. When a new depth image is received, the average must move based on the newly received frame. Therefore, Equation (1) is used on each pixel of the reference image, with "old average" being the data from the existing reference image and "new data" being the data from the newly received depth image. Equation (1) is performed for each pixel of the reference image to update the existing reference image, creating a newly updated reference image in step 464.
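The sketch below illustrates one way such an update could be implemented. Equation (1) itself appears earlier in the patent; the form used here, new average = old average + (new data − old average) / N, is an assumption of a conventional running average over roughly N frames and may differ from the patent's Equation (1).

```python
import numpy as np

def update_reference(reference, new_depth, n_frames=10):
    """Fold the newest depth image into a per-pixel running average (assumed form)."""
    return reference + (new_depth - reference) / float(n_frames)
```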

Once regions of the depth image have been identified as moving, it is useful to segment them into individual groups and track their locations over time. If multiple objects are detected by the system, the output is a collection of pixels that have been identified as moving. To group these pixels into individual objects, the system can use a method of segmentation or grouping called connected component analysis. Neighboring pixels that are also identified as moving are considered connected and therefore part of the same group. Once all of the pixels have been accounted for, the result is a set of groups that represent potential moving objects in the scene. Alternative methods for determining which pixels are part of the same group can also be used, such as thresholding Euclidean 3D distance or surface distance. Another alternative is to use clustering methods, where a fixed number of groups is hypothesized, pixels are associated with each hypothetical group, and the overall hypothesis is scored based on how well it explains the data. Another method is to maintain the probability that a pixel belongs to each possible group rather than directly associating it with a single group. This may be valuable where the tracking system that maintains group assignments between frames can handle ambiguity in pixel association, making it more robust in some application scenarios.

Looking back at FIG. 8, step 466 of FIG. 8 includes grouping the pixels of the motion image that represent motion (e.g., store data 1). In one embodiment, the pixels are grouped based on proximity to each other. FIG. 11 is a flowchart describing one embodiment of a process for grouping pixels of the motion image based on proximity. Thus, FIG. 11 is one example implementation of step 466 of FIG. 8. In step 702, the system will access an ungrouped pixel in the motion image that indicates motion (e.g., has data 1). In step 704, the system will identify all connected pixels that include the pixel accessed in step 702 as one group. For example, if a pixel has the data 1, the system looks for all neighboring pixels that also have data 1, and of those neighbors with data 1, the system will look for all their neighboring pixels that also have data 1, and so on, until all connected contiguous pixels showing data 1 are grouped into a group. Then, in step 706, the system determines whether there are any ungrouped pixels that have not been considered yet. If so, the process moves back to step 702, accesses another ungrouped pixel and attempts to find connected pixels. If, in step 706, it is determined that there are no more ungrouped pixels, then in step 708, the system will identify the center of each group. When step 708 is performed, each of the pixels is in a group. It is possible that some groups will only have one pixel. For groups that have more than one pixel, the system will determine the geometric center of the group and identify the x and y coordinates (in pixel space) of that center. Note that FIG. 10 shows the motion image with the pixels grouped. For example, a first set of pixels is grouped as depicted by oval 602 and a second set of pixels is grouped as depicted by oval 604.
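The following is a minimal sketch of grouping motion pixels by 4-connectivity and computing each group's geometric center; the flood-fill approach, the function name and the output format are illustrative choices, not the patent's required implementation.

```python
import numpy as np
from collections import deque

def group_motion_pixels(motion):
    """Return a list of (center_x, center_y, pixel_list) for each connected group."""
    h, w = motion.shape
    visited = np.zeros_like(motion, dtype=bool)
    groups = []
    for y in range(h):
        for x in range(w):
            if motion[y, x] != 1 or visited[y, x]:
                continue
            # breadth-first flood fill over 4-connected neighboring motion pixels
            pixels, queue = [], deque([(y, x)])
            visited[y, x] = True
            while queue:
                cy, cx = queue.popleft()
                pixels.append((cx, cy))
                for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                    if 0 <= ny < h and 0 <= nx < w and motion[ny, nx] == 1 and not visited[ny, nx]:
                        visited[ny, nx] = True
                        queue.append((ny, nx))
            # geometric center of the group, in pixel coordinates
            center_x = sum(p[0] for p in pixels) / len(pixels)
            center_y = sum(p[1] for p in pixels) / len(pixels)
            groups.append((center_x, center_y, pixels))
    return groups
```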

Groups containing a small number of pixels may be the result of noise in the depth image that exceeds the threshold used in the motion detection step. To further filter out interference from noise, one embodiment may require a minimum pixel count or minimum physical size of a group before performing further motion analysis.
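A small sketch of that noise filter, assuming the (center_x, center_y, pixel_list) group format from the grouping sketch above; the minimum count of 8 pixels is an arbitrary illustrative value.

```python
def filter_small_groups(groups, min_pixels=8):
    """Discard groups whose pixel count is below an assumed minimum size."""
    return [g for g in groups if len(g[2]) >= min_pixels]
```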

Looking back at FIG. 8, after the pixels of the motion image are grouped in step 466, the groups are associated with objects being tracked in step 468. It is often valuable to track those objects over time and build an understanding that a moving object in one image is the same object in the following image. In one embodiment, the system will create object history data that stores information about the motion of objects being tracked. For example, this object history data can store an identification of all of the various objects being tracked and the prior positions of those objects in the motion image. There are many methods for tracking the objects that can be used; no one particular method for tracking is required. In one example, a spatial likelihood tracking approach can be used that records the positions of each object over time and reassociates new motion with whichever object it is most likely to belong to. One example is to associate movement with the closest identified object in the previous image in the sequence. Alternative embodiments include predicting the trajectory of the moving object based on recent movements. The system can also model movement when the object is known to be a subcomponent of a larger object, such as an arm of a larger body.

FIGS. 12 and 13 provide two embodiments of processes for associating groups of pixels in the motion image with objects being tracked (step 468 of FIG. 8). The process of FIG. 12 associates groups of pixels with objects by associating a group with the closest identified object in the previous image in the sequence. In step 804 of FIG. 12, the system will access the object history data discussed above. Thus, the system will have information about all the objects previously tracked and their positions in the previous motion images. The object history data can be stored in any suitable type of data structure. In step 806, the system will attempt to associate the center of each group in the current motion image with the object of closest proximity in the most recent motion image. In some embodiments, there will be a threshold distance so that the association must be reasonable. For example, if the closest object is halfway across the image, that may not be a reasonable association and will be discarded. The particular threshold used for the association will vary based on implementation and experimentation.

In some embodiments, the depth image processing and object reporting module 194 performing the association of step 806 will make use of the information in gestures library 190 or structure data 192 to associate groups with objects. For example, based on known shapes, the system can correlate a group with an existing object. If an object being tracked is a person, external structure data can be used to identify the shape of a person, which will help the system better associate a group of pixels in the motion image with the person. Additionally, if the system knows it has previously been tracking a person with an arm moving, the external structure data 192 and the gestures library 190 could teach the system about probable movement of an arm, leg or other body part so the system can more readily identify the object. Similarly, the system may be able to recognize an inanimate object such as a ball or tennis racket based on references or templates in structure data 192.

Step 806 attempts to assign every group to an object being tracked. In some embodiments, some groups may not be assignable. In one embodiment, any group that cannot be assigned to an object will be assumed to be a new object. In step 808, any unassociated groups have new objects created and these unassociated groups are assigned to the new objects.
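The sketch below illustrates the nearest-object association and the new-object fallback just described (steps 806-808). The data layout (group centers as (x, y) tuples, object history as a dictionary of last-known positions) and the 50-pixel distance limit are illustrative assumptions.

```python
import math

def associate_groups(group_centers, object_history, max_distance=50.0):
    """group_centers: list of (x, y); object_history: dict object_id -> (x, y)."""
    assignments = {}
    for i, (gx, gy) in enumerate(group_centers):
        best_id, best_dist = None, max_distance
        for obj_id, (ox, oy) in object_history.items():
            dist = math.hypot(gx - ox, gy - oy)
            if dist < best_dist:
                best_id, best_dist = obj_id, dist
        assignments[i] = best_id   # None: no reasonable match, treat as a new object
    return assignments
```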

If moving objects come into close proximity to each other, they may appear to merge into a single region. In some applications, it may be desirable to try to separate the single region back into the individual objects based on previous observation. One embodiment includes segmenting the pixels based on their proximity to the centers of the previous objects. Pixels are associated with whichever previous object they were closest to. The distance metric may be as simple as Euclidean distance, surface distance or another representation of distance.

In step 810, the system determines whether two objects from a previous motion image have merged in the current motion image. That is, if there are two objects in the previous motion image and the current image has only one object in a similar or proximal location as the two objects in the previous image, the system can determine that the two objects have merged. If the system determines that no objects have merged, then the process continues at step 812 and the object history data discussed above is updated so that all groups in the current motion image have their center coordinates (x, y) used to update the positions of the objects being tracked. If, in step 810, the system determines that two objects have merged, then the objects are separated by grouping the pixels based on proximity to the separate objects in the previous motion images in step 814. In step 816, the separated groups are assigned to the appropriate objects from the previous motion image. In step 812, after step 816, the object history data is updated so that all groups in the current motion image have their center coordinates (x, y) used to update the positions of the objects being tracked.
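A minimal sketch of the separation in steps 814-816: each pixel of the merged group is reassigned to whichever previous object center it is closest to. Plain Euclidean distance in pixel space and the dictionary layout are assumptions for illustration.

```python
import math

def split_merged_group(pixels, previous_centers):
    """pixels: list of (x, y); previous_centers: dict object_id -> (x, y)."""
    split = {obj_id: [] for obj_id in previous_centers}
    for px, py in pixels:
        closest = min(previous_centers,
                      key=lambda obj_id: math.hypot(px - previous_centers[obj_id][0],
                                                    py - previous_centers[obj_id][1]))
        split[closest].append((px, py))
    return split
```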

FIG. 13 is another embodiment of associating groups with objects (step 468 of FIG. 8) that is based on predicting the trajectory of moving objects using the object history data. In step 850 of FIG. 13, the system will access the object history data discussed above. In step 852, using that data, the system can predict the trajectory of each object being tracked. The system knows the x and y coordinates in the motion images for previous positions of the object. This data can be used to determine a trajectory and predict where those objects should be in the current motion image. In step 854, the system attempts to associate the center of each of the groups in the current image with the predicted trajectories. The system can also use the structure data 192 and gestures library 190 discussed above to predict the trajectories. In step 858, any group that was not associated in step 854 is assigned to a new object. In step 860, the system determines whether two objects have merged, as discussed above. If no objects have merged, then the object history data is updated in step 862 to include the information from the current motion image. If two objects have merged (step 860), then in step 864 those groups are separated as discussed above with respect to step 814. In step 866, the newly separated groups are assigned to the previous objects (same as step 816). In step 862, the object history data is updated, as discussed above.
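The sketch below illustrates steps 852-854 with one possible prediction model: a constant-velocity extrapolation from each object's last two recorded positions, followed by nearest-prediction matching. The model choice, data layout and distance limit are illustrative assumptions.

```python
import math

def predict_next_position(history):
    """history: list of (x, y) positions, oldest first; constant-velocity extrapolation."""
    if len(history) < 2:
        return history[-1]
    (x0, y0), (x1, y1) = history[-2], history[-1]
    return (x1 + (x1 - x0), y1 + (y1 - y0))

def associate_by_trajectory(group_centers, object_histories, max_distance=50.0):
    """object_histories: dict object_id -> list of (x, y) positions."""
    predictions = {obj_id: predict_next_position(h) for obj_id, h in object_histories.items()}
    assignments = {}
    for i, (gx, gy) in enumerate(group_centers):
        best_id, best_dist = None, max_distance
        for obj_id, (px, py) in predictions.items():
            dist = math.hypot(gx - px, gy - py)
            if dist < best_dist:
                best_id, best_dist = obj_id, dist
        assignments[i] = best_id   # None: treat the group as a new object
    return assignments
```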

Looking back at FIG. 8, after associating the groups with objects being tracked, the information about the objects is reported to the application in step 470. In one example embodiment, steps 460-470 are performed by depth image processing and object reporting module 194. In step 470, depth image processing and object reporting module 194 reports to application 196 an identification of all the objects it is tracking and the (x, y) positions of each of those objects in the current motion image. In step 472, application 196 will update based on the object information reported in step 470. The tracking of the objects can be mapped directly to cursor control, where movement along the plane perpendicular to the camera defines the two-dimensional position of the cursor and movement in depth can trigger other events. In such an embodiment, step 470 would also include reporting depth (distance) information for each object. For example, the depth values for all the pixels in the group associated with the object can be averaged and that average can be the depth number reported for the particular object. Alternatively, all of the depth values can be reported to the application, or the depth value for the center of each group can be reported.
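As a minimal illustration of the averaged-depth report described above, the sketch below builds the tuple an application might receive for one tracked object; the function name and data layout are assumptions carried over from the earlier sketches.

```python
import numpy as np

def report_object(obj_id, center, pixels, depth_image):
    """Return (object id, (x, y) center, average depth over the object's pixels)."""
    depths = [depth_image[y, x] for x, y in pixels]
    return (obj_id, center, float(np.mean(depths)))
```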

In other embodiments, each object can correlate to an image being displayed on a monitor as part of a video game or other software application. When any of the objects move, application 196 will update the positions of the corresponding images on the monitor. For example, if a person moves, the person's avatar in the video game may move. If a person throws a ball, an image of the ball may move in the video game. There are many different ways an application can update itself based on the motion of the tracked objects. No particular way of updating the application is required for the technology described herein.

The object history data may also incorporate information about neighboring objects or the structure of a larger object provided by an external system (e.g., structure data 192). For example, if a moving object is identified to be the left arm of a human body, the system can infer that a certain set of pixels pertains to the left hand. Other variations can also be implemented.

Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims. It is intended that the scope of the technology be defined by the claims appended hereto.
