WO2014007955A1 - Systems and methods of camera-based body-motion tracking - Google Patents

Systems and methods of camera-based body-motion tracking Download PDF

Info

Publication number
WO2014007955A1
WO2014007955A1 (PCT/US2013/045433)
Authority
WO
WIPO (PCT)
Prior art keywords
information
location
motion
fingertip
camera
Prior art date
Application number
PCT/US2013/045433
Other languages
French (fr)
Inventor
Kiminobu Sugaya
Andrew Sugaya
Original Assignee
University Of Central Florida Research Foundation, Inc.
Priority date
Filing date
Publication date
Priority claimed from US13/494,975 external-priority patent/US9001036B2/en
Application filed by University Of Central Florida Research Foundation, Inc. filed Critical University Of Central Florida Research Foundation, Inc.
Priority to CA2875815A priority Critical patent/CA2875815A1/en
Priority to KR1020147037139A priority patent/KR20150017370A/en
Priority to EP13812975.4A priority patent/EP2859532A4/en
Publication of WO2014007955A1 publication Critical patent/WO2014007955A1/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/20: Analysis of motion
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017: Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/03: Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/041: Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F 3/046: Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means, by electromagnetic means
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04842: Selection of displayed objects or displayed text elements
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • G06T 19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/20: Analysis of motion
    • G06T 7/254: Analysis of motion involving subtraction of images
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10016: Video; Image sequence
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20212: Image combination
    • G06T 2207/20224: Image subtraction
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30196: Human being; Person
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2219/00: Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T 2219/20: Indexing scheme for editing of 3D models
    • G06T 2219/2016: Rotation, translation, scaling


Abstract

Systems and methods for camera-based fingertip tracking are disclosed. One such method includes identifying at least one location of a fingertip in at least one of the video frames, and mapping the location to a user input based on the location of the fingertip relative to a virtual user input device.

Description

SYSTEMS AND METHODS OF CAMERA-BASED BODY-MOTION TRACKING
CROSS REFERENCE TO RELATED APPLICATIONS
[001] This application claims priority to U.S. Serial No. 61/015,243, filed December 20, 2007, which is incorporated herein by reference in its entirety. This application is a continuation-in-part of U.S. Patent Application No. 13/494,975, which is a CIP of U.S. Patent Application No. 12/339,631, filed December 19, 2008, and issued as U.S. Patent No. 8,199,106 on June 12, 2012. Priority is claimed to these applications, which are incorporated herein in their entirety.
TECHNICAL FIELD
[002] The present disclosure relates to visual tracking of objects, and more specifically, to camera-based tracking of fingertips.
BACKGROUND
[003] Conventional mechanisms for user input to computers include keyboards and pointing devices. Touch-sensing displays are also used, but these displays are expensive, and a large touch-sensing display requires more exertion from the user to perform the same action. Furthermore, a touch-sensing display allows fingertip movement only on a flat surface, and cannot recognize fingertip movement in free space. Fingertip tracking in free space has been accomplished using infrared and through a combination of lasers and steering mirrors, but these methods require complicated, expensive equipment and prove impractical. Methods for tracking fingers in free space with ordinary cameras have been developed, but fingers have less mobility and dexterity than fingertips.
BRIEF DESCRIPTION OF THE DRAWINGS
[004] Many aspects of the disclosure can be better understood with reference to the following drawings. The components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure.
[005] FIG. 1 depicts one embodiment of a system for camera-based fingertip tracking.
[006] FIG. 2 is a hardware block diagram of one embodiment of the computer from FIG. 1.
[007] FIG. 3 illustrates a pixel grid utilized by the fingertip tracking logic from FIG. 2.
[008] FIG. 4 is a flowchart of the operation in one embodiment of the fingertip tracking logic from FIG. 2.
[009] FIG. 5 illustrates the operation of the mapping block from FIG. 4.
[010] FIG. 6 depicts one example architecture which can be used to implement the fingertip tracking logic from FIG. 2.
[011] FIG. 7 depicts another example architecture which can be used to implement the fingertip tracking logic from FIG. 2.
[012] FIG. 8 depicts an eyewear device embodiment that projects an image viewable to the user and which allows user input to interact with the image.
[013] FIG. 9 illustrates an alternative eyewear device embodiment in the form of a contact lens.
[014] FIG. 10 illustrates a system embodiment that utilizes a holographic image.
[015] FIG. 11 illustrates a system for translating sign language information.
[016] FIG. 12 depicts a flow chart of operation which can be used to translate sign language information.
[017] FIG. 13 depicts a flow chart of operation which can be used to manipulate a virtual object.
[018] FIG. 14 depicts a flow chart of operation for identifying and tracking body part information in video images.
SUMMARY
[019] Systems and methods for camera-based fingertip tracking are disclosed. One such method includes identifying at least one location of a fingertip in at least one of the video frames, and mapping the location to a user input based on the location of the fingertip relative to a virtual user input device. One such system includes a video camera, memory and a processor. The video camera is configured to produce a series of video frames. The memory is configured to store program code, and the processor is programmed by the program code to: examine at least one of the video frames to identify at least one location of a fingertip within the video frame; and map the location to one of a plurality of
predetermined user inputs, based on the location of the fingertip relative to a virtual user input device.
DETAILED DESCRIPTION
[020] The embodiments disclosed herein provide systems and methods for camera-based fingertip tracking. In one such embodiment, a video camera captures motion of one or more fingertips in a series of video frames. Software analyzes the video frames to identify a fingertip location and/or movement in the video frames. Based on location of the fingertip relative to a virtual user input device, the software maps the fingertip to a virtual input such as a key or mouse location, and provides this input to an application.
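The capture-analyze-map loop just described can be sketched in a few lines. The fragment below is only an illustrative outline, not the patented implementation: it assumes OpenCV for frame capture, and the find_fingertip and map_to_input helpers are hypothetical placeholders for the analysis and mapping steps detailed in the following paragraphs.

```python
import cv2

def find_fingertip(gray):
    """Placeholder for the pixel-grid analysis described below (FIGs. 3-4)."""
    return None  # would return a (row, col) grid location

def map_to_input(location):
    """Placeholder for mapping a grid location to a key or mouse event (FIG. 5)."""
    return None

cap = cv2.VideoCapture(0)                        # video camera 120
while True:
    ok, frame = cap.read()                       # one video frame
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    location = find_fingertip(gray)              # identify a fingertip location in the frame
    if location is not None:
        event = map_to_input(location)           # map it to a virtual key or mouse action
        # the event would then be delivered to an application (see FIGs. 6-7)
cap.release()
```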
[021] FIG. 1 depicts one embodiment of a system for camera-based fingertip tracking. System 100 includes a computer 110 in communication with a digital video camera 120. Video camera 120 captures digital images in its field of view, and fingertip tracking logic 250 (see FIG. 2) analyzes the series of captured images to track the location and motion of fingertips. Logic 250 then maps the fingertip location and/or motion to data that is usable by an application running on computer 110. In some embodiments, logic 250 maps fingertip location and/or motion to a key, a combination of keys, a mouse click, or a mouse movement.
[022] In the example of FIG. 1, video camera 120 is located near computer 110, with at least one fingertip 130 positioned in the field of view of video camera 120. However, other locations for video camera 120 are possible, as long as video camera 120 is in communication with computer 110. In this example embodiment, video camera 120 is separate from computer 110, but in other embodiments video camera 120 is integrated with computer 110.
[023] Also shown in FIG. 1 is a display 140 (optional). In some embodiments, computer 110 presents a keyboard representation on display 140. The user points to locations on the keyboard representation, fingertip tracking logic 250 tracks the location and motion of fingertips relative to this keyboard representation, then maps the location and/or motion to keyboard locations. In other embodiments, computer 110 projects a keyboard representation onto a surface. The user's fingers are not required to touch the display or the surface in these embodiments. Instead, the user's interaction is with the representation of the keyboard rather than a physical object, since fingertip tracking logic 250 detects fingertip location and tracks fingertip motion rather than touch or contact.
[024] FIG. 2 is a hardware block diagram of one embodiment of computer 110. Computer 110 contains a number of components that are familiar to a person of ordinary skill in the art. FIG. 2 omits a number of conventional components, known to those skilled in the art, that are not necessary to explain the operation of the computer.
[025] Components of computer 110 include a processor 210, memory 220, and storage 230 (e.g., hard disk, flash RAM, flash ROM, EEPROM, etc.). These components are coupled via a bus 240. Bus 240 may include multiple types of buses such as a dedicated memory bus, a combined memory-IO bus, and/or one or more peripheral buses (e.g., universal serial bus, IEEE 1394, etc.). The computer is also in communication with video camera 120. In the example of FIG. 2, video camera 120 is connected via bus 240. However, in other embodiments, computer 110 includes a network interface (not shown), and video camera 120 communicates with computer 110 over a network (e.g., Ethernet, WiFi, Internet, etc.).
[026] In some embodiments, fingertip tracking logic 250 is implemented as executable instructions stored in memory 220 and executed by processor 210. In other embodiments, fingertip tracking logic 250 is implemented in hardware logic (e.g., an application-specific integrated circuit).
[027] Logic 250 is described herein as residing within the same computer 110 which contains video camera 120. However, a person of ordinary skill in the art should appreciate that the functionality of these components could be distributed in other ways. The operation of fingertip tracking logic 250 will now be described in conjunction with FIGs. 3-6.
[028] Fingertip tracking logic 250 analyzes video frames captured from video camera 120, where each frame is processed as a pixel grid 310, shown in FIG. 3. At least one portion of the pixels within pixel grid 310 is identified by logic 250 as a fingertip 130 (see FIG. 1). In some embodiments, logic 250 identifies multiple fingertips 130, each corresponding to a different portion of pixel grid 310.
[029] FIG. 4 is a flowchart of the operation in one embodiment of fingertip tracking logic 250. A positioning stage (block 410) compares each pixel in a current frame with the same pixel in the previous frame. If the grayscale (or color, if color is used) of the pixel has changed significantly, the current pixel is then compared with other neighboring pixels. A pixel with changed grayscale that is also brighter than its surroundings is considered to be a "target" pixel, since fingertip 130 tends to be brighter than its surrounding area. Determining that a particular target pixel may correspond to fingertip 130 thus locates the potential fingertip's location in space. Various embodiments may track different numbers of fingertips.
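The positioning stage of block 410 can be illustrated with a short sketch. This is a hedged example only: it assumes grayscale frames held as NumPy arrays, and the change threshold and neighborhood radius are illustrative parameters not specified in the disclosure.

```python
import numpy as np

def find_target_pixels(prev, curr, change_thresh=30, radius=1):
    """Block 410 sketch: flag pixels whose grayscale changed significantly from the
    previous frame and that are brighter than their neighborhood, since a fingertip
    tends to be brighter than its surrounding area."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    targets = []
    for y, x in zip(*np.nonzero(diff > change_thresh)):
        window = curr[max(0, y - radius):y + radius + 1,
                      max(0, x - radius):x + radius + 1]
        if curr[y, x] > window.mean():           # brighter than its surroundings
            targets.append((y, x))
    return targets
```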
[030] A tracking stage (block 420) tracks the motion of the target pixel by removing the background, averaging the grayscales of corresponding pixels in the two frames, and determining motion by subtracting the pixel grayscale from the second frame.
[031] Having processed two frames (current and previous), later frames are similarly analyzed, but with a focus on target pixels (block 430). Instead of all pixels, only target pixels are compared with the same pixel in the previous frame, then with neighboring pixels. If a target pixel once again meets the target criteria in these later frames, the target pixel then becomes a "confirmed" pixel, which most likely represents a fingertip.
[032] Motion of these "confirmed" pixels, corresponding to fingertips, is tracked (block 440). As in block 420, the tracking is performed by removing the background, averaging the grayscales of corresponding pixels in the two frames, and determining motion by subtracting the pixel grayscale from the second frame.
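A corresponding sketch of blocks 430-440 follows, under the same assumptions as the block 410 example above; the threshold and neighborhood radius are again illustrative, and the subsequent motion tracking would repeat the background-removal and subtraction step of block 420.

```python
def confirm_targets(prev, curr, targets, change_thresh=30, radius=1):
    """Blocks 430-440 sketch: only earlier target pixels are re-examined in later
    frames; a target pixel that meets the criteria again becomes a 'confirmed'
    pixel, which most likely represents a fingertip."""
    confirmed = []
    for y, x in targets:
        changed = abs(int(curr[y, x]) - int(prev[y, x])) > change_thresh
        window = curr[max(0, y - radius):y + radius + 1,
                      max(0, x - radius):x + radius + 1]
        if changed and curr[y, x] > window.mean():
            confirmed.append((y, x))
    return confirmed
```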
[033] Once fingertips have been identified via confirmed pixels, this embodiment of fingertip tracking logic 250 maps fingertips 130 from locations within pixel grid 310 to user input (block 450), and directs the user input to applications running on the computer 110 (block 460). The techniques described herein allow tracking of all fingertips independently and in free space, which permits greater flexibility and provides more information from gestures.
[034] FIG. 5 illustrates the operation of mapping block 450 in more detail. At least a portion of pixel grid 310 corresponds to a virtual keyboard 510. Each key 520 of virtual keyboard 510 corresponds to a set of pixel locations within grid 310. In the example of FIG. 5, the number "1" key (310) is located at grid location [3,3]. This embodiment of fingertip tracking logic 250 therefore maps fingertip 130 to the "1" key on keyboard 510 when that fingertip 130 is located at grid location [3,3]. In some embodiments, a key mapping is not determined until a fingertip 130 has stayed in the same location for a specific period of time.
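The dwell-based key mapping of block 450 might look like the following sketch. The key layout and the half-second dwell threshold are assumptions for illustration; the disclosure only specifies that the fingertip must stay in the same location for a specific period of time.

```python
import time

VIRTUAL_KEYS = {(3, 3): "1", (3, 4): "2", (3, 5): "3"}   # hypothetical cells of keyboard 510
DWELL_SECONDS = 0.5                                       # illustrative dwell threshold

class KeyMapper:
    """Emit a key only after a fingertip has stayed in the same grid cell long enough."""
    def __init__(self):
        self.cell, self.since = None, 0.0

    def update(self, cell):
        now = time.monotonic()
        if cell != self.cell:                    # fingertip moved to a new cell
            self.cell, self.since = cell, now
            return None
        if now - self.since >= DWELL_SECONDS:
            self.since = now                     # reset so the key does not repeat every frame
            return VIRTUAL_KEYS.get(cell)
        return None
```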
[035] Other fingertip positions are similarly mapped to other locations on virtual keyboard 510. The virtual keyboard concept can be extended to handle key sequences on a physical keyboard as virtual keys. For example, the Ctrl-X key sequence, which represents a "Cut" command in some user interfaces, can be represented as a single "Cut" key on virtual keyboard 510.
[036] Another example of mapping fingertip motion to user input treats fingertip motion as movement of a pointing device (e.g., mouse, trackball). Thus, the software converts motion of the index fingertip in free space to motion of the index finger on the pixel grid, which is mapped to moving a pointer on a virtual surface. One embodiment uses a special location on the grid to represent mouse actions such as a button click or double click, so that moving the index finger to this location acts as a click or double click. In some embodiments, the mouse action is not activated until a fingertip 130 has stayed in the same click or double click location for a specific period of time.
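A pointer-style mapping can be sketched the same way; the reserved click cell, the gain factor, and the frame-count dwell criterion below are illustrative choices, not values taken from the disclosure.

```python
CLICK_CELL = (0, 0)      # hypothetical grid location reserved for mouse clicks
GAIN = 10                # illustrative scaling from grid cells to pointer pixels

def to_pointer_event(prev_cell, curr_cell, dwell_frames, dwell_needed=15):
    """Translate index-fingertip grid motion into a pointer move, or a click when
    the fingertip dwells on the reserved click location."""
    if curr_cell == CLICK_CELL and dwell_frames >= dwell_needed:
        return ("click",)
    dy = (curr_cell[0] - prev_cell[0]) * GAIN
    dx = (curr_cell[1] - prev_cell[1]) * GAIN
    return ("move", dx, dy)
```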
[037] FIG. 6 depicts one example architecture which can be used to implement fingertip tracking logic 250. Device drivers for particular input devices provide low-level events to an operating system 610. This example includes a mouse driver 620 and a keyboard driver 630, which provide mouse events 640 and key events 650 to operating system 610. Operating system 610 then provides user interface events 660, 670 to various applications 680 executing on computer 110. Typically, operating system 610 translates from low-level events to corresponding user interface events, though this translation may not be present in all embodiments. For example, the key sequence "Control-A" may be received by operating system 610 as two keystroke events, but translated into a single user interface event for the application.
[038] A camera driver 690 interfaces to the video camera 120 of FIG. 1, and fingertip tracking logic 250 retrieves frames from camera driver 690. Logic 250 determines fingertip locations and/or motion, then translates location and motion information (relative to the frame or grid) into user input. (This operation was described earlier in connection with FIGs. 3-5.) This user input, in the form of mouse-specific or keyboard-specific data, is provided to mouse driver 620 and keyboard driver 630 (respectively). Thus, in this example architecture fingertip tracking logic 250 operates as a mouse and a keyboard from the perspective of the mouse and keyboard drivers. Some embodiments of fingertip tracking logic 250 may also use an application-specific mapping 695 (e.g., specific to a word processor, spreadsheet, etc.).
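The dispatch path of FIG. 6 can be pictured with the sketch below. The MouseDriver and KeyboardDriver classes are hypothetical stand-ins, not real operating-system driver interfaces; the point is only that the tracking logic hands mouse-specific and keyboard-specific data to the respective driver.

```python
class MouseDriver:
    """Hypothetical stand-in for mouse driver 620; a real driver would post to the OS."""
    def post(self, event):
        print("mouse event:", event)

class KeyboardDriver:
    """Hypothetical stand-in for keyboard driver 630."""
    def post(self, event):
        print("key event:", event)

def dispatch(user_input, mouse, keyboard):
    """Route fingertip-derived input so that, from the drivers' perspective,
    the tracking logic behaves like a mouse and a keyboard."""
    if user_input[0] in ("move", "click"):
        mouse.post(user_input)
    else:
        keyboard.post(("key", user_input))

dispatch(("move", 10, -5), MouseDriver(), KeyboardDriver())   # usage sketch
```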
[039] FIG. 7 depicts another example architecture which can be used to implement fingertip tracking logic 250. This architecture is similar to that of FIG. 6, but fingertip tracking logic 250 itself operates as a mouse driver (by providing a mouse driver interface 710) and as a keyboard driver (by providing a keyboard driver interface 720). That is, fingertip tracking logic 250 provides the same interfaces to operating system 610 that mouse driver 620 and keyboard driver 630 do.
[040] Embodiments of fingertip tracking logic 250 can control robots from a remote location, which may be used in many different fields and applications. One example is medical diagnostic and surgical procedures. Another example is military applications, for example a targeting system. The techniques described herein can be extended through the use of multiple cameras to implement a three-dimensional tracking system, in which fingertip movement is tracked in three coordinate planes.
[041] In some embodiments, system 100 is located in an operating room and used by a person such as a surgeon. Such embodiments include a display which displays, in the field of view of video camera 120, the output of an application such as an image viewer. The user interacts with the image viewer and manipulates the displayed image (e.g., a diagnostic image of the patient) by pointing. Using the techniques described above, fingertip tracking logic 250 identifies the pointed-to portions of the image, and translates this to input provided to the image viewer. Other types of applications are also contemplated (e.g., web browser, file manager, etc.).
[042] Since the user controls the application without touching a solid surface, the sterility of the operating room is preserved. A variation on this embodiment utilizes gloves which include a marker (e.g., color, fluorescent, etc.) in the fingertip area. In this embodiment, positioning and tracking is simplified, while the mapping and injection of input into an application remains the same.
[043] In another embodiment, provided is an eyewear device 80 that is shown in FIG. 8. The eyewear device 80 displays an image field 85' to a user that is generated on lenses 85a, b. A virtual object 87' (such as a bat) is displayed in the image field, which is also generated on the lenses as 87a, b. This image may be provided in a 3D or 2D perspective. The eyewear device 80 includes a first camera 81 and an optional second camera 82 located on the corners of the frame. The eyewear device 80 is also shown with a processor 83 and power supply 84, which would make the device remote and wireless. A user wearing the eyewear device 80 can manipulate the virtual object 87' by interaction with the user's hand(s), finger(s), and/or other portion of the user's body, such as according to the programming described herein.
[044] In a further embodiment as shown in FIG. 9, what is provided is a contact lens 90 that generates an image 95. The lens 90 also includes a camera 91 and power source 93. Alternatively, the contact lens is worn in conjunction with at least one camera positioned elsewhere on the user.
[045] According to a further embodiment, provided is a system shown in FIG. 10 that includes a holographic generator 106 that produces a holographic image field 105'. The image field 105' displays a virtual object 107', such as a heart as shown. The system includes at least one camera. As shown, the system includes a first and second camera 101, 102 which can track the motions of a user's hand, finger(s), fingertips, or other portion of the user's body.
[046] According to another embodiment, provided is a sign language translator system. Using FIG. 11 as one reference, the system 110 involves the input of images from an information provider's body part (such as a hand, finger, and/or fingertip) 113 acquired through one or more cameras 111, 112 and inputted into a computer 114. The computer 114 is programmed to process, typically by a processor 116, the images by correlating the input of information with a unit of information stored on a computer readable medium 115, either in the computer 114, or in communication with the computer 114. The stored information is conveyed to a receiver 117 in a form different from that provided by user 113. For example, the information conveyed to the receiver may be visual information pertaining to a correlative image or words pertaining to the unit of information intended to be conveyed by the user 113. By way of example, the user may provide hand/finger/body part motion pertaining to a first sign language to convey a unit of information pertaining to an "apple". The output of information to the receiver may be a visual image of an apple, a spoken word "apple", tactile information (such as Braille), or images of hand/finger motion pertaining to a sign language word for "apple" but in a different sign language version. Sign languages have different versions just as written or spoken languages differ. For example, the English version sign language motion for "apple" may be processed to produce an image for a Spanish version sign language hand/finger/body part motion. Thereby, the embodiment provides the ability to translate one version of sign language into another version of sign language, and/or translate sign language into an image or written form understandable by one who does not necessarily know sign language.
[047] According to a specific embodiment, with reference to FIG. 12, a computer such as 114 from FIG. 11 is programmed to: 121-receive images from a camera pertaining to motion of a provider's hand and/or fingers regarding a first unit of sign language information; 122-process the images to correlate the motion to a stored unit of information stored on a computer readable medium; and 123-output a translated unit of information to a receiver in a form different from the first unit of information. In a more specific embodiment, the computer 114 is programmed to identify from the images pixels corresponding to the hand and/or fingertip(s) and to track motion of the hand and/or fingertip(s) based on pixels in the images. For example, as illustrated in the block diagram of FIG. 14, the computer is programmed to: 141-identify target pixels corresponding to potential fingertips; 142-track motion of target pixels; 143-determine whether target pixels are confirmed pixels; and 144-track motion of confirmed pixels corresponding to fingertips. It is noted that reference to "fingertips" in FIG. 14 is intended to pertain to a hand, finger, fingers, fingertip and/or fingertips depending on the application.
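The 121-123 flow can be summarized in a small sketch. The sign identifiers, the dictionary contents, and the classify_motion helper are hypothetical; correlating real motion to a stored unit of information would require a trained gesture recognizer.

```python
# Hypothetical stored units of information (computer readable medium 115).
SIGN_DICTIONARY = {
    "ASL:apple": {"text": "apple", "image": "apple.png", "BSL": "BSL:apple"},
}

def classify_motion(track):
    """Placeholder for correlating tracked hand/finger motion with a stored sign."""
    return "ASL:apple"   # illustrative result only

def translate(track, output_form="text"):
    """Blocks 121-123 sketch: receive tracked motion, correlate it to a stored unit
    of information, and output that unit in a form different from the input."""
    unit = SIGN_DICTIONARY.get(classify_motion(track))
    return None if unit is None else unit.get(output_form)
```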
[048] In a further embodiment, provided is a system of manipulating a virtual object. In reference to the block diagram of FIG. 13, the system includes a computer programmed to: 131-actualize a virtual object to a user; 132-acquire a series of video frames from a camera; 133-identify in a video frame target pixels corresponding to a fingertip; 134-track motion of target pixels in a subsequent video frame; 135-determine whether target pixels are confirmed pixels pertaining to the fingertip; 136-track motion of confirmed pixels corresponding to the fingertip; and 137-map fingertip position and/or motion to the virtual object. It is noted that reference to "fingertip" in FIG. 14 is intended to be broad and may pertain to a hand, finger, fingers, fingertip, fingertips and/or other body part depending on the application. The actualizing of an image may be in the presentation of the image to a user via a screen, lens, holographic image, and the like.
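Block 137, mapping fingertip position and/or motion to the virtual object, might be realized as in the sketch below. The translation-only mapping and the gain factor are illustrative simplifications; rotation or scaling gestures could be mapped in the same manner.

```python
class VirtualObject:
    """Minimal stand-in for a displayed virtual object (e.g., bat 87' or heart 107')."""
    def __init__(self, x=0.0, y=0.0):
        self.x, self.y = x, y

    def translate(self, dx, dy):
        self.x += dx
        self.y += dy

def map_motion_to_object(obj, confirmed_track, gain=0.1):
    """Block 137 sketch: apply the fingertip's frame-to-frame displacement to the
    virtual object, scaled by an illustrative gain factor."""
    for (y0, x0), (y1, x1) in zip(confirmed_track, confirmed_track[1:]):
        obj.translate((x1 - x0) * gain, (y1 - y0) * gain)
```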
[049] In another embodiment, a system is provided that processes image information, correlates the image information with a stored unit of information, and outputs the stored unit of information in a form different from that of the original input form. The output of information may take a number of different forms, including, but not limited to, aural, visual, electrical or neuroelectrical information interpretable by the receiver. The image information may be obtained at one location and transferred via a network or the internet to another location for outputting. Processing of the information may be conducted at the location of image collection or information outputting, or at a separate third location. Also, the image information may be stored for later processing. For example, the aspects of the system embodiment described in this paragraph are applied to system embodiments such as those described and illustrated in reference to FIGs. 11-13.
[050] The foregoing description has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the disclosure to the precise forms disclosed. Obvious modifications or variations are possible in light of the above teachings. The implementations discussed, however, were chosen and described to illustrate the principles of the disclosure and its practical application, to thereby enable one of ordinary skill in the art to utilize the disclosure in various implementations and with various modifications as are suited to the particular use contemplated. All such modifications and variations are within the scope of the disclosure as determined by the appended claims when interpreted in accordance with the breadth to which they are fairly and legally entitled.
[051] Any process descriptions or blocks in flowcharts should be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps in the process. As would be understood by those of ordinary skill in the art of software development, alternate implementations are also included within the scope of the disclosure. In these alternate implementations, functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved.
[052] The systems and methods disclosed herein can be embodied in any computer- readable medium for use by or in connection with an instruction execution system, apparatus, or device. Such instruction execution systems include any computer-based system, processor-containing system, or other system that can fetch and execute the instructions from the instruction execution system. In the context of this disclosure, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by, or in connection with, the instruction execution system. The computer readable medium can be, for example but not limited to, a system or propagation medium that is based on electronic, magnetic, optical,
electromagnetic, infrared, or semiconductor technology.
[053] Specific examples of a computer-readable medium using electronic technology would include (but are not limited to) the following: an electrical connection (electronic) having one or more wires; a random access memory (RAM); a read-only memory (ROM); an erasable programmable read-only memory (EPROM or Flash memory). A specific example using magnetic technology includes (but is not limited to) a portable computer diskette. Specific examples using optical technology include (but are not limited to) an optical fiber and a portable compact disk read-only memory (CD-ROM).

Claims

CLAIMS
What is claimed is:
1. A method of manipulating a virtual object in a series of video frames, the method comprising: identifying at least one location of a body part in at least one of the video frames; and mapping the location to the virtual object based on the location of the body part relative to the virtual object, wherein said identifying at least one location of a body part comprises identifying a target pixel in a first one of the video frames, the target pixel representing a potential body part location; tracking motion of the target pixel in video frames occurring after the first one; determining whether the target pixel is a confirmed pixel, the confirmed pixel representing a probable body part location; and tracking motion of the confirmed pixel.
2. The method of claim 1, wherein said body part is a hand, finger, fingertip, arm, or leg, or a combination thereof.
3. The method of claim 1, wherein said identifying further comprises identifying the at least one location of a body part based on the presence of a particular color.
4. The method of claim 1, wherein said identifying further comprises identifying the at least one location of a body part based on the presence of fluorescence.
5. The method of claim 1, wherein said body part is a hand, finger, fingertip, and/or arm.
6. A method of translating sign language, the method comprising:
receiving a series of images from a camera pertaining to a motion of a provider's hand, finger, fingertip, and/or arm corresponding to a first unit of sign language information;
processing the images to correlate the motion to a stored unit of information stored on a computer readable medium; and
outputting an output unit of information to a receiver in a form different from the first unit of information.
7. The method of claim 6, wherein said outputting comprises providing an aural signal.
8. The method of claim 7, wherein said aural signal comprises speaking a word to which the first unit of information pertains.
9. The method of claim 6, wherein said outputting comprises providing a word in text form.
10. The method of claim 6, wherein said outputting comprises providing a word in tactile form.
11. The method of claim 10, wherein said tactile form is Braille.
12. The method of claim 6, wherein said outputting comprises providing image information comprising motion of a hand, finger, fingertip, and/or arm pertaining to sign language of a type different from that of said first unit of information.
13. The method of claim 12, wherein said different type of sign language is of a language different than that of said first unit of information.
14. The method of claim 6, wherein said outputting comprises displaying a visual image of an object to which the first unit of sign language information pertains.
15. A system for manipulating a virtual object, the system comprising: an eyewear device comprising a lens that generates an image field that displays a virtual object; a camera that monitors motion in said image field; and a computer programmed to track motion of a user in the image field and map the motion to said virtual object.
16. The system of claim 15, wherein said eyewear device comprises eyeglasses, and wherein said camera is affixed to said eyeglasses.
17. The system of claim 15, wherein said eyewear device is a contact lens.
18. A system for translating sign language, the system comprising: a camera in communication with a computer, the computer comprising a computer-readable medium and being programmed to receive a video image of a unit of sign language motion and correlate the sign language motion to a unit of information stored on the computer-readable medium; and an output device for outputting an output unit of information, wherein the computer is programmed to output the output unit of information via said output device.
19. The system of claim 18, wherein said output device comprises a speaker, display, or Braille machine.
20. The system of claim 18, wherein a combination of said camera, computer and output device are located at the same location, or all are located at separate locations.
21. The system of claim 20, wherein said camera, and optionally said computer, are at a location different from that of said output device, and wherein information is transferred between the camera and the output device via a network.
22. The system of claim 21, wherein said network is an internet.
PCT/US2013/045433 2012-06-12 2013-06-12 Systems and methods of camera-based body-motion tracking WO2014007955A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CA2875815A CA2875815A1 (en) 2012-06-12 2013-06-12 Systems and methods of camera-based body-motion tracking
KR1020147037139A KR20150017370A (en) 2012-06-12 2013-06-12 Systems and methods of camera-based body-motion tracking
EP13812975.4A EP2859532A4 (en) 2012-06-12 2013-06-12 Systems and methods of camera-based body-motion tracking

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US13/494,975 2012-06-12
US13/494,975 US9001036B2 (en) 2007-12-20 2012-06-12 Systems and methods of camera-based fingertip tracking

Publications (1)

Publication Number Publication Date
WO2014007955A1 (en) 2014-01-09

Family

ID=49882417

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2013/045433 WO2014007955A1 (en) 2012-06-12 2013-06-12 Systems and methods of camera-based body-motion tracking

Country Status (4)

Country Link
EP (1) EP2859532A4 (en)
KR (1) KR20150017370A (en)
CA (1) CA2875815A1 (en)
WO (1) WO2014007955A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6147678A (en) * 1998-12-09 2000-11-14 Lucent Technologies Inc. Video hand image-three-dimensional computer interface with multiple degrees of freedom
US20020024675A1 (en) * 2000-01-28 2002-02-28 Eric Foxlin Self-referenced tracking
US20030052963A1 (en) * 2001-09-18 2003-03-20 Stephen Junkins Camera driven virtual workspace management
US20030137494A1 (en) * 2000-05-01 2003-07-24 Tulbert David J. Human-machine interface
US20070120996A1 (en) * 2005-11-28 2007-05-31 Navisense, Llc Method and device for touchless control of a camera

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7333507B2 (en) * 2001-08-31 2008-02-19 Philip Bravin Multi modal communications system
US8199106B2 (en) * 2007-12-20 2012-06-12 University Of Central Florida Research Foundation, Inc. Systems and methods of camera-based fingertip tracking
US20100299150A1 (en) * 2009-05-22 2010-11-25 Fein Gene S Language Translation System
US8751215B2 (en) * 2010-06-04 2014-06-10 Microsoft Corporation Machine based sign language interpreter

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6147678A (en) * 1998-12-09 2000-11-14 Lucent Technologies Inc. Video hand image-three-dimensional computer interface with multiple degrees of freedom
US20020024675A1 (en) * 2000-01-28 2002-02-28 Eric Foxlin Self-referenced tracking
US20030137494A1 (en) * 2000-05-01 2003-07-24 Tulbert David J. Human-machine interface
US20030052963A1 (en) * 2001-09-18 2003-03-20 Stephen Junkins Camera driven virtual workspace management
US20070120996A1 (en) * 2005-11-28 2007-05-31 Navisense, Llc Method and device for touchless control of a camera

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP2859532A4 *

Also Published As

Publication number Publication date
CA2875815A1 (en) 2014-01-09
EP2859532A4 (en) 2016-06-15
EP2859532A1 (en) 2015-04-15
KR20150017370A (en) 2015-02-16

Similar Documents

Publication Publication Date Title
US9791938B2 (en) System and methods of camera-based fingertip tracking
US8199106B2 (en) Systems and methods of camera-based fingertip tracking
Carmigniani et al. Augmented reality: an overview
Petersen et al. Cognitive augmented reality
KR101652535B1 (en) Gesture-based control system for vehicle interfaces
US20130154913A1 (en) Systems and methods for a gaze and gesture interface
US20090322671A1 (en) Touch screen augmented reality system and method
JP2015531526A (en) Touchless input
US20110022950A1 (en) Apparatus to create, save and format text documents using gaze control and method associated based on the optimized positioning of cursor
CN104714638A (en) Medical technology controller
CN110568929B (en) Virtual scene interaction method and device based on virtual keyboard and electronic equipment
Ebert et al. Out of touch–A plugin for controlling OsiriX with gestures using the leap controller
CN104460967A (en) Recognition method of upper limb bone gestures of human body
CN113190109A (en) Input control method and device of head-mounted display equipment and head-mounted display equipment
Riduwan et al. Finger-based gestural interaction for exploration of 3D heart visualization
CN111443854B (en) Action processing method, device and equipment based on digital person and storage medium
Lapointe et al. A literature review of AR-based remote guidance tasks with user studies
US11009963B2 (en) Sign language inputs to a vehicle user interface
Ismail et al. Vision-based technique and issues for multimodal interaction in augmented reality
CN112424736A (en) Machine interaction
WO2014007955A1 (en) Systems and methods of camera-based body-motion tracking
KR20150044243A (en) Electronic learning apparatus and method for controlling contents by hand avatar
Gepner et al. Gaze as a supplementary modality for interacting with ambient intelligence environments
Zhang et al. A hybrid 2D–3D tangible interface combining a smartphone and controller for virtual reality
Heinrich et al. Interacting with medical volume data in projective augmented reality

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13812975

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2875815

Country of ref document: CA

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 20147037139

Country of ref document: KR

Kind code of ref document: A

REEP Request for entry into the european phase

Ref document number: 2013812975

Country of ref document: EP