Publication number: US 20100134612 A1
Publication type: Application
Application number: US 12/700,055
Publication date: 3 Jun 2010
Filing date: 4 Feb 2010
Priority date: 22 Aug 1997
Also published as: US6750848, US8553079, US8723801, US20050012720, US20130169535, US20130222252, US20140313125
Inventors: Timothy Pryor, Peter H. Smith
Original Assignee: Timothy Pryor, Peter H. Smith
Method for enhancing well-being of a small child or baby
US 20100134612 A1
A method for enhancing a well-being of a small child or baby utilizes at least one TV camera positioned to observe one or more points on the child or an object associated with the child. Signals from the TV camera are outputted to a computer, which analyzes the output signals to determine a position or movement of the child or child associated object. The determined position or movement is then compared to pre-programmed criteria in the computer to determine a correlation or importance, and thereby to provide data to the child.
1. A method for enhancing a well-being of a small child or baby, comprising the steps of:
positioning at least one TV camera to observe one or more points on the child or an object associated with the child;
outputting signals from the TV camera to a computer;
analyzing the output signals of the TV camera with the computer to determine a position or movement of the child or child associated object;
comparing the determined position or movement to pre-programmed criteria in the computer to determine a correlation; and
providing data to the child based on the determined correlation.
2. A method according to claim 1, including the additional step of recording data of the position or movement of the child for subsequent analysis.
3. A method according to claim 2, wherein the recorded data is used to diagnose problems of the child.
4. A method according to claim 1, wherein infra-red illumination of the child is used.
5. A method according to claim 1, wherein the point is on the clothing or an article of play of the child.
6. A method according to claim 1, wherein the data provided to the child is audio data.
7. A method according to claim 1, wherein the data provided to the child is visual data.
8. A method according to claim 1, wherein the data provided to said child concerns one or more of the parents of the child.
9. A method according to claim 1, including the additional step of providing high contrast datums on said child or object which may be more easily seen by said one or more TV cameras.
10. A method according to claim 1, wherein the data is designed to elicit a response from the child which can be determined using the TV camera.
11. A method according to claim 1, wherein the data is designed to improve an intelligence of the child.
12. A method for enhancing a well-being of a small child or baby, comprising the steps of:
positioning at least one TV camera to observe one or more points on the child or an object associated with the child;
outputting signals from the TV camera to a computer system;
analyzing the output signals of the TV camera with the computer system to determine position or movement data of the child or child associated object;
recording the position or movement data for analysis; and
using the recorded data, enhancing a well-being of the child.
13. A method according to claim 12, wherein the data is used to determine potential problems of the child.
14. A method according to claim 12, wherein the data is designed to elicit a response from the child which can be determined using the TV camera.
15. A method according to claim 12, wherein the data is used to determine appropriate responses to the child.
16. A method for enhancing a well-being of a small child or baby, comprising the steps of:
positioning at least one TV camera to observe one or more points on the child or an object associated with the child;
outputting signals from the TV camera to a computer;
analyzing the output signals of the TV camera with the computer to determine a position or movement of the child or child associated object;
comparing the determined position or movement to pre-programmed criteria in the computer; and
if certain criteria are determined to have been met, transmitting an image of the child to a remote monitor for observation.
17. A method according to claim 16, including the further step of transmitting other data relating to the child.
  • [0001]
    This application is a continuation of U.S. application Ser. No. 10/866,191, filed Jun. 14, 2004; which is a continuation of U.S. application Ser. No. 09/433,297, filed Nov. 3, 1999; which claims benefit of U.S. Provisional Application No. 60/107,652, filed Nov. 9, 1998. This application is also a continuation-in-part of U.S. application Ser. No. 09/138,339 filed Aug. 21, 1998, now abandoned; which claims benefit of U.S. Provisional Application No. 60/056,639 filed Aug. 22, 1997. This application further claims benefit of U.S. Provisional Application No. 60/059,561 filed Sep. 19, 1998. These applications are hereby incorporated by reference.
  • [0000]
    • 1. Man Machine Interfaces: Ser. No. 08/290,516, filed Aug. 15, 1994, and now U.S. Pat. No. 6,008,800.
    • 2. Touch TV and Other Man Machine Interfaces: Ser. No. 08/496,908, filed Jun. 29, 1995, and now U.S. Pat. No. 5,982,352.
    • 3. Systems for Occupant Position Sensing: Ser. No. 08/968,114, filed Nov. 12, 1997, now abandoned [which claims benefit of 60/031,256, filed Nov. 12, 1996].
    • 4. Target holes and corners: U.S. Ser. No. 08/203,603, filed Feb. 28, 1994, and 08/468,358 filed Jun. 6, 1995, now U.S. Pat. No. 5,956,417 and U.S. Pat. No. 6,044,183.
    • 5. Vision Target Based Assembly: U.S. Ser. No. 08/469,429, filed Jun. 6, 1995, now abandoned; 08/469,907, filed Jun. 6, 1995, now U.S. Pat. No. 6,301,763; 08/470,325, filed Jun. 6, 1995, now abandoned; and 08/466,294, filed Jun. 6, 1995, now abandoned.
    • 6. Picture Taking Method and Apparatus: Provisional Application No. 60/133,671, filed May 11, 1998.
    • 7. Methods and Apparatus for Man Machine Interfaces and Related Activity: Provisional Application No. 60/133,673 filed May 11, 1998.
    • 8. Camera Based Man-Machine Interfaces: Provisional Patent Application no. 60/142,777, filed Jul. 8, 1999.
  • [0010]
    The disclosures of the above-referenced applications are incorporated herein by reference.
  • [0011]
    not applicable
  • [0012]
    1. Field of the Invention
  • [0013]
    The invention relates to simple input devices for computers, particularly, but not necessarily, intended for use with 3-D graphically intensive activities, and operating by optically sensing object or human positions and/or orientations. In many preferred embodiments, the invention uses real-time stereo photogrammetry using single or multiple TV cameras whose output is analyzed and used as input to a personal computer, typically to gather data concerning the 3D location of parts of, or objects held by, a person or persons.
  • [0014]
    This continuation application seeks to provide further detail on useful embodiments for computing. One embodiment is a keyboard for a laptop computer (or a stand-alone keyboard for any computer) that incorporates digital TV cameras to look at points on, typically, the hand or the finger, or objects held in the hand of the user, which are used to input data to the computer. It may also, or alternatively, look at the head of the user.
  • [0015]
    Both hands or multiple fingers of each hand, or an object in one hand and fingers of the other can be simultaneously observed, as can alternate arrangements as desired.
  • [0016]
    2. Description of Related Art
  • [0017]
    My referenced co-pending applications, incorporated herein by reference, discuss many prior-art references in various pertinent fields, which form a background for this invention.
  • [0018]
    FIG. 1 illustrates a laptop or other computer keyboard with cameras according to the invention located on the keyboard surface to observe objects such as fingers and hands above the keyboard.
  • [0019]
    FIG. 2 illustrates another keyboard embodiment using special datums or light sources such as LEDs.
  • [0020]
    FIG. 3 illustrates a further finger detection system for laptop or other computer input.
  • [0021]
    FIG. 4 illustrates learning, amusement, monitoring, and diagnostic methods and devices for the crib, playpen and the like.
  • [0022]
    FIG. 5 illustrates a puzzle toy for young children having cut out wood characters according to the invention.
  • [0023]
    FIG. 6 illustrates an improved handheld computer embodiment of the invention, in which the camera or cameras may be used to look at objects, screens and the like as well as look at the user along the lines of FIG. 1.
  • [0024]
    FIG. 7 illustrates new methods for internet commerce and other activities involving remote operation with 3D virtual objects display.
  • [0025]
    A laptop (or other) computer keyboard based embodiment is shown in FIG. 1. In this case, a stereo pair of cameras 100 and 101, located on either side of the keyboard, is used, desirably having cover windows 103 and 104 mounted flush with the keyboard surface 102. The cameras are preferably pointed obliquely inward at angles Φ toward the center of the desired work volume 170 above the keyboard. In the case of cameras mounted at the rear of the keyboard (toward the display screen), these cameras are also inclined to point toward the user at an angle as well.
  • [0026]
    Alternate camera locations may be used such as the positions of cameras 105 and 106, on upper corners of screen housing 107 looking down at the top of the fingers (or hands, or objects in hand or in front of the cameras), or of cameras 108 and 109 shown.
  • [0027]
    One of the referenced embodiments of the invention is to determine the pointing direction vector 160 of the user's finger (for example, pointing at an object displayed on screen 107), or the position and orientation of an object held by the user. Alternatively, finger position data can be used to determine gestures such as pinch or grip, and other examples of relative juxtaposition of objects with respect to each other, as has been described in the co-pending referenced applications. Positioning of an object or its portions (such as the hands or fingers of a doll) is also of use, though more so with larger keyboards and displays.
  • [0028]
    In one embodiment, shown in FIG. 2, cameras such as 100/101 are used to simply look at the tip of a finger 201 (or thumb) of the user, or an object such as a ring 208 on the finger. Light from below, such as that provided by a single central light 122, can be used to illuminate the finger, which typically looks bright under such illumination. It is also noted that the illumination is directed or concentrated in the area where the finger is typically located, such as work volume 170. If the light is of sufficient spectral content, the natural flesh tone of the finger can be observed, and recognized, by use of the color TV cameras 100/101.
  • [0029]
    As is typically the case, the viewing region of the overlapping cameras is relatively isolated to the overlapping volumetric zone of their fields 170 shown, due to the focal lengths of their lenses and the angulation of the camera axes with respect to each other. This restricted overlap zone helps mitigate unwanted matches in the two images due to information generated outside the zone of overlap. Thus no significant image matches of other objects in the room are found, since the only flesh-toned object in the zone is typically the finger or fingers of the user, or, alternatively, the user's hand or hands. Similarly, objects or targets thereon can be distinguished by special colors or shapes.
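    As a minimal sketch of how such flesh-tone segmentation might be coded, one could mask flesh-toned candidates in each camera frame before attempting any stereo matching. The sketch below assumes OpenCV and NumPy; the HSV thresholds are illustrative assumptions, not values from this disclosure.

```python
import cv2
import numpy as np

def flesh_tone_mask(frame_bgr):
    """Segment candidate flesh-toned regions in one camera frame.

    The HSV thresholds are illustrative; a real system would tune
    them for its particular illumination and camera.
    """
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    lower = np.array([0, 40, 60], dtype=np.uint8)
    upper = np.array([25, 180, 255], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper)
    # Remove speckle so only coherent, finger-sized blobs remain.
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
```

    Restricting the stereo match search to blobs in this mask, within the overlap zone 170, is what keeps other room objects from producing false matches.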
  • [0030]
    If desired or required, motion of the fingers can also be used to further distinguish their presence vis-a-vis any static background. If, for example, by subtraction of successive camera frames the image of a particular object is determined to have moved, it is likely the object of potential interest, and it can be further analyzed directly to determine whether it is the object of interest.
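    A minimal sketch of that frame subtraction, again assuming OpenCV, with an illustrative threshold:

```python
import cv2

def moving_pixels(prev_gray, curr_gray, thresh=25):
    """Frame subtraction: flag pixels that changed between two
    successive grayscale frames (threshold value is illustrative)."""
    diff = cv2.absdiff(prev_gray, curr_gray)
    _, moved = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    return moved
```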
  • [0031]
    In case of obscuration of the fingers or objects in the hand, cameras in additional locations such as those mentioned above, can be used to solve for position if the view of one or more cameras is obscured.
  • [0032]
    The use of cameras mounted on both the screen and the keyboard allows one to deal with obscurations that may occur, and with objects that may be more advantageously delineated in one view than in the other.
  • [0033]
    In addition, it may in many cases be desirable to have a datum on the top of the finger rather than the bottom, because on the bottom it can get in the way of certain activities. In this case the sensors are required on the screen looking downward, or in some other location, such as off the computer entirely and located overhead, as has been noted in a previous application.
  • [0034]
    To determine finger location, a front-end processor like that described in the incorporated, co-pending "Target holes and corners" applications, U.S. Ser. Nos. 08/203,603 and 08/468,358, can be used, also allowing the finger's shape as well as its color to be detected.
  • [0035]
    Finger gestures comprising a sequence of finger movements can also be detected by analyzing sequential image sets, such that the motion of the finger, or of one finger with respect to another (as in pinching something), can be determined. Cameras 100 and 101 have been shown at the rear of the keyboard near the screen, or at the front. They may be mounted in the middle of the keyboard or any other advantageous location.
  • [0036]
    The cameras can also see one's fingers directly, to allow typing as now, but without the physical keys. One can type in space above the plane of the keyboard (or, in this case, the plane of the cameras); this is useful for those applications where a keyboard of conventional style is too big (e.g., the handheld computer of FIG. 6).
  • FIG. 2
  • [0037]
    For fast, reliable operation, it is also desirable to use retro-reflective materials and other materials to augment the contrast of objects used in the application. For example, a line target such as 200 can be worn on a finger 201, and advantageously can be located, if desired, between two joints of the finger as shown. This allows the tip of the finger to be used to type on the keyboard without feeling unusual, as might be the case with target material on the tip of the finger.
  • [0038]
    The line image detected by the camera can also be provided by a cylinder, such as retroreflective cylinder 208 worn on the finger 201, which effectively becomes a line image in the field of view of each camera (assuming each camera is equipped with a sufficiently coaxial light source, typically one or more LEDs such as 210 and 211). The line image pairs from the stereo cameras can then easily be used to solve for the pointing direction of the finger, which is often the desired result: the line, in the stereo pair of images, provides the 3D pointing direction of the finger, for example pointing at an object displayed on the screen 140 of the laptop computer 138.
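    As a sketch of that solution, one can triangulate the two ends of the line target in the stereo pair and normalize their difference. This assumes a calibrated rig whose 3x4 projection matrices are known (OpenCV's triangulatePoints is used here; all names are illustrative):

```python
import cv2
import numpy as np

def pointing_vector(P_left, P_right, tip_lr, base_lr):
    """Recover a unit 3D pointing direction from a stereo pair.

    P_left, P_right : 3x4 projection matrices from calibration.
    tip_lr, base_lr : ((x, y) in left image, (x, y) in right image)
                      for the two ends of the line target.
    """
    def triangulate(pt_l, pt_r):
        pl = np.asarray(pt_l, dtype=float).reshape(2, 1)
        pr = np.asarray(pt_r, dtype=float).reshape(2, 1)
        Xh = cv2.triangulatePoints(P_left, P_right, pl, pr)
        return (Xh[:3] / Xh[3]).ravel()      # homogeneous -> 3D point

    v = triangulate(*tip_lr) - triangulate(*base_lr)
    return v / np.linalg.norm(v)             # unit pointing vector
```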
  • FIG. 3
  • [0039]
    It is also possible to have light sources on the finger that can be utilized, such as the two LED light sources shown in FIG. 3. These can be used with either TV camera type sensors or with PSD type analog image position sensors, as disclosed in the incorporated references.
  • [0040]
    In particular, the ring-mounted LED light sources 301 and 302 can be modulated at different frequencies that can be individually discerned by sensors imaging the sources onto a respective PSD detector. Alternatively, the sources can simply be turned on and off at different times, such that the position of each point can be independently found, allowing the pointing direction to be calculated from the LED point data gathered by the stereo pair of PSD-based sensors.
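    For the time-multiplexed variant with TV cameras rather than PSDs, one sketch of isolating each LED is to difference frames captured in sync with the LED drive; the frame timing and the dark-frame reference are assumptions of the example:

```python
import numpy as np

def locate_time_multiplexed_leds(frame_led1, frame_led2, frame_dark):
    """Find the image location of each alternately flashed LED by
    differencing against a frame with both LEDs off, then taking the
    brightest remaining pixel. Inputs are grayscale numpy arrays."""
    def brightest(frame):
        d = frame.astype(int) - frame_dark.astype(int)
        y, x = np.unravel_index(np.argmax(d), d.shape)
        return (x, y)

    return brightest(frame_led1), brightest(frame_led2)
```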
  • [0041]
    The “natural interface keyboard” here described can have cameras or other sensors located at the rear, looking obliquely outward toward the front as well as inward, so that their working volumes overlap in the middle of the keyboard and nearly the full volume over the keyboard area is accommodated.
  • [0042]
    Clearly, larger keyboards can have a larger working volume than one might have on a laptop. The pair of sensors used can be augmented with other sensors mounted on the screen housing. It is noted that the laptop's unitary construction provides the linked dimension needed for calibration between the sensors located on the screen and those on the keyboard.
  • [0043]
    One can use angle sensing means such as a rotary encoder for the laptop screen tilt. Alternatively, cameras located on the screen can image reference points on the keyboard to achieve this. This allows the sensors mounted fixedly with respect to the screen to be calibrated with respect to the sensors and keyboard space below. It also allows one to use stereo pairs of sensors that are not in the horizontal direction (such as 101/102), but could, for example, be a camera sensor such as 100 on the keyboard coupled with one on the screen, such as 106.
  • [0044]
    Knowing the pointing angles of the two cameras with respect to one another allows one to solve for the 3D location of objects from the matching of the object image positions in the respective camera fields.
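    For the simplest case of a rectified, parallel-axis stereo pair, the standard photogrammetric relations (implied but not written out in this disclosure) recover that 3D location from the matched image positions:

```latex
Z = \frac{f\,B}{x_L - x_R}, \qquad X = \frac{x_L\,Z}{f}, \qquad Y = \frac{y_L\,Z}{f}
```

    where f is the focal length, B the baseline between the cameras, and x_L - x_R the disparity between the matched left and right image positions.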
  • [0045]
    As noted previously, it is also of interest to locate a line or cylinder type target on the finger between the first and second joints. This allows one to use the fingertip for keyboard activity, but by raising the finger up, it can be used as a line target capable of solving for the pointing direction, for example.
  • [0046]
    Alternatively, one can use two point targets on the finger, such as retroreflective datums, colored datums such as rings, or LED light sources, which can also be used with PSD detectors, as has also been noted with respect to FIG. 2.
  • [0047]
    When using the cameras to determine the position of the fingers stereoscopically from their flesh-tone images, it is useful to preprocess the data obtained from the cameras in order to look for the finger. This can be done on the basis of color, of shape, and of motion.
  • [0048]
    In this invention, I have shown the use not only of cameras located on a screen looking downward or outward from the screen, but also of cameras that can be used instead of, or in combination with, those on the screen, placed essentially on the member on which the keyboard is incorporated. This allows the keyboard-mounted cameras, which are preferably mounted flush with the keyboard surface, to be unobtrusive, and yet able to see the user's fingers, hands, or objects held by the user and, in some cases, the face of the user.
  • [0049]
    This arrangement is also useful for 3D displays, for example where special synchronized glasses (e.g., the “Crystal Eyes” brand often used with Silicon Graphics workstations) are used to alternately present right and left images to each eye. In this case the object may appear to be actually in the workspace 170 above the keyboard, and it may be manipulated by virtually grasping (pushing, pulling, etc.) it, as has been described in co-pending applications.
  • FIG. 4 Baby Learning and Monitoring System
  • [0050]
    A baby's reaction to the mother (or father), and the mother's analysis of the baby's reaction, is very important. There are many gestures of babies indicated in child psychology as being quite indicative of various needs, wants, feelings, emotions, etc. These gestures are typically made with the baby's hands.
  • [0051]
    Today this is done and learned entirely by the mother being with the baby. However, with an electro-optical sensor based computer system, such as that described in co-pending applications, located proximate to or even in the crib (for example), one can have the child's reactions recorded, not just in the sense of a videotape, which would be too long and involved for most to use, but also in terms of the actual motions, which could be computer-recorded and analyzed, also with the help of the mother, as to what the baby's responses were. Such motions, combined with other audio and visual data, can be very important to the baby's health, safety, and learning.
  • [0052]
    Consider for example crib 400 with computer 408 having LCD monitor 410, speaker 411, and camera system (single or stereo) 420 as shown, able to amuse or inform baby 430, while at the same time recording (visually, aurally, and in position data of detected movement of parts of his body or of objects such as rattles in his hand) his responses, for any or all of the purposes of diagnosis of his state of being, remote transmission of his state, cues to various programs or images to display to him or broadcast to others, or the like.
  • [0053]
    For one example, the baby's motions could be used to signal a response from the TV, either in the absence of the mother or with the mother watching on a remote channel. This can even be over the Internet if the mother is at work.
  • [0054]
    For example, a comforting message from the mother could come up on the TV. It could be prerecorded, or it could actually be live, with TV cameras in the mother's or father's workplace, for example on a computer used by the parent, to tell the baby something reassuring or to comfort the baby. Indeed, the parent can be monitored using the invention and indicate something back, or even control a teleoperated robotic device to give a small child something to eat or drink, for example. The same applies to a disabled person.
  • [0055]
    If the father or mother came up on the screen, the baby could wave at it, move its head, or “talk” to it, but the hand gestures may be the most important.
  • [0056]
    If the mother knows what the baby is after, she can talk to the baby or say something, or show something that the baby recognizes, such as a doll. After a while of doing this live, one can then move to talking to the baby from some prerecorded data.
  • [0057]
    What other things might we suppose? The baby, for example, knows to put its hand on the mother's cheek to cause the mother to turn to it. The baby also learns some other reflexes when it is very young that it forgets when it gets older. Many of these reflexes are hand movements, and they are important in communicating with the remote TV-based representation of the mother, whether real via telepresence or from CD-ROM or DVD disk (or other media, including information transmitted to the computer from afar), and for the learning of the baby's actions.
  • [0058]
    Certainly, just from the point of view of making the baby feel good, it would seem that certain motherly (or fatherly, etc.) responses to certain baby actions, in the form of words and images, would be useful. This stops short of the physical holding of the baby which is often needed, but could act as a stopgap to allow the parents to get another hour's sleep, for example.
  • [0059]
    As far as the baby touching things, I have discussed in other applications methods for realistic touch combined with images. This leads to a new form of touchable crib mobile that could contain video images, or be imaged itself, and, if desired, be touched in ways that would go far beyond any response one could get from a normal mobile.
  • [0060]
    For example, let us say there is a targeted (or otherwise TV-observable) mobile 450 in the crib above the baby. The baby reaches up and touches a piece of the mobile, which is sensed by the TV camera system (either from the baby's hand position, the mobile movement, or both), and a certain sound is called up by the computer, a musical note for example. Touch another piece of the mobile, and another musical note sounds. The mobile becomes a musical instrument for the baby that could play either notes or chords or complete passages, or any other desired programmed function.
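    A sketch of that touch-to-note mapping, given hand positions sensed in 3D by the camera system; the piece positions, note names, and touch radius are all illustrative assumptions:

```python
import numpy as np

# Hypothetical mobile pieces: known 3D positions (meters) and notes.
MOBILE_PIECES = {
    "star":  (np.array([0.10, 0.30, 0.50]), "C4"),
    "moon":  (np.array([-0.05, 0.28, 0.48]), "E4"),
    "cloud": (np.array([0.02, 0.35, 0.52]), "G4"),
}

def note_for_touch(hand_xyz, touch_radius=0.05):
    """Return the note of the piece nearest the sensed hand position,
    if the hand is within touch_radius meters of it."""
    for name, (pos, note) in MOBILE_PIECES.items():
        if np.linalg.norm(hand_xyz - pos) < touch_radius:
            return note
    return None
```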
  • [0061]
    The baby can also signal things. Agitated movements, for example, would often mean that it is unhappy. This could be interpreted, using learned movement signatures and artificial intelligence as needed by the computer, to call for the mother even if the baby wasn't crying. If the baby cries, that can be picked up by microphone 440 and recognized using a voice recognition system, along the lines of that used in the IBM ViaVoice commercial product, for example. Even the degree of crying can be analyzed, to determine appropriate action.
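    A crude sketch of detecting such agitation from the camera data is to measure how much of the image changes between successive frames over a short window; the thresholds are illustrative and notify_parent is a hypothetical hook:

```python
import cv2
import numpy as np

def agitation_score(frames, pixel_thresh=25):
    """Mean fraction of pixels changing between consecutive grayscale
    frames; higher values suggest more agitated movement."""
    changed = [np.mean(cv2.absdiff(a, b) > pixel_thresh)
               for a, b in zip(frames, frames[1:])]
    return float(np.mean(changed))

# e.g., alert if movement stays high over a window of frames:
# if agitation_score(last_two_seconds) > 0.15:
#     notify_parent()   # hypothetical alerting hook
```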
  • [0062]
    The computer could also be used to transmit information of this sort, via internet e-mail, to the mother, who could even be at work. And until help arrives, in the form of the mother's intervention or otherwise, the computer could access a program that could display on a screen things that the baby likes, and could try to soothe the baby through images of familiar things, music, or whatever. This could be useful at night when parents need sleep, and anything that would make the baby feel more comfortable would help the parents.
  • [0063]
    It could also be used to allow the baby to provide input to the device. For example, if the baby were hungry, a picture of the bottle could be brought up on the screen, and the baby could then yell for the bottle. Or, if the baby needed his diaper changed, perhaps something reminiscent of that. If the baby reacts to such suggestions of his problem, this gives a lot more intelligence as to why he is crying, and while mothers can generally tell right away, not everyone else can. In other words, this is quite useful for babysitters and other members of the household, so they can act more intelligently on the signals the baby is providing.
  • [0064]
    Besides the crib, the system as described can be used in conjunction with a playpen, high chair, or other place of baby activity.
  • [0065]
    As the child gets older, the invention can further be used with more advanced activity with toys, and to take data from toy positions as well: for example, blocks, dolls, little cars, and even moving toys such as trikes, scooters, drivable toy cars, and bikes with training wheels.
  • [0066]
    The following figure illustrates the ability of the invention to learn, and thus to assist in the creation of toys and other things.
  • FIG. 5 Learning Puzzle Toy
  • [0067]
    Disclosed in FIG. 5 is a puzzle toy 500 in which woodcut animals such as bear 505 and lion 510 are pulled out with a handle such as 511. The child can show the animal to the camera, and a computer 530 with TV camera (or cameras) 535 can recognize the shape as the animal, and provide a suitable image and sounds on screen 540.
  • [0068]
    Alternatively, and more simply, a target or targets on the back of the animal can be used, such as triangle 550 on the back of lion 510. In either case the camera can solve for the 3D, and even 5- or 6-degree-of-freedom, position and orientation of the animal object, and cause it to move accordingly on the screen as the child maneuvers it. The child can hold two animals, one in each hand, and they can each be detected, even with a single camera, and be programmed in software to interact as the child wishes (or as he learns the program).
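    A sketch of solving such a target pose with a single camera: given the target points' known layout on the toy and their detected image locations, a perspective-n-point solver returns the full 6D pose. The disclosure mentions a triangle target; the sketch below assumes four known target points, since OpenCV's standard solver wants at least four correspondences, and all coordinates are illustrative.

```python
import cv2
import numpy as np

# Hypothetical layout (meters) of four target points on the toy's back,
# expressed in the toy's own coordinate frame.
OBJECT_POINTS = np.array([[0.00, 0.00, 0.0],
                          [0.04, 0.00, 0.0],
                          [0.02, 0.03, 0.0],
                          [0.02, 0.01, 0.0]], dtype=np.float64)

def toy_pose(image_points, camera_matrix, dist_coeffs):
    """Solve the toy's 6D pose (rotation + translation) from the
    detected image locations of the four target points."""
    ok, rvec, tvec = cv2.solvePnP(
        OBJECT_POINTS, np.asarray(image_points, dtype=np.float64),
        camera_matrix, dist_coeffs)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)    # rotation vector -> 3x3 matrix
    return R, tvec
```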
  • [0069]
    This is clearly for very young children of two or three years of age. The toys have to be large so they can't be swallowed.
  • [0070]
    With the invention in this manner, one can make a toy of virtually anything, for example a block. Just hold this block up, teach the computer/camera system the object and play using any program you might want to represent it and its actions. To make this block known to the system, the shape of the block, the color of the block or some code on the block can be determined. Any of those items could tell the camera which block it was, and most could give position and orientation if known.
  • [0071]
    At that point, an image is called up from the computer representing that particular animal or whatever else the block is supposed to represent. Of course this can be changed in the computer to be a variety of things, if this is acceptable to the child. It could certainly be changed in size; a small lion could grow into a large lion, for example. The child could probably absorb that more readily than a lion changing into a giraffe, say, since the block wouldn't correspond to that. The child can program or teach the system any of his blocks to be the animal he wants, and that might be fun.
  • [0072]
    For example, he or the child's parent could program a square to be a giraffe, whereas a triangle would be a lion. Maybe this could be an interesting way to get the child to learn his geometric shapes!
  • [0073]
    Now, the basic block held up in front of the camera system could be looked at just for what it is. As the child moves the block toward or away from the camera system, one may get a rough sense of depth from the change in apparent size of the object. However, this is not so easy, as the object also changes in apparent shape due to any sort of rotation.
  • [0074]
    Particularly interesting, then, is to also sense the rotations of the object, so that the animal can actually move realistically in three dimensions on the screen. One might also de-tune the sensed shape of the movement, so that the child's relatively jerky movements would not appear jerky, or so accentuated, on the screen. Conversely, of course, one can go the other way and accentuate the motions.
  • [0075]
    A line target around the edge of the object, for example, is often useful for providing position or orientation information to the TV camera based analysis software, and for making the object easier to see in reflective illumination.
  • Aid to Speech Recognition
  • [0076]
    The previous co-pending application entitled “Useful man machine interfaces and applications,” referenced above, discussed the use of a person's movements or positions to aid in recognizing the voice spoken by the person.
  • [0077]
    In one instance, this can be achieved by simply using one's hand to indicate to the camera system of the computer that the voice recognition should start (or stop, or perform any other function, such as marking a paragraph or sentence end, etc.).
  • [0078]
    Another example is to use the camera system of the invention to determine the location of the person's head (or other part), from which one can instruct a computer to preferentially evaluate the sound field, in phase and amplitude, of two or more spaced microphones, so as to listen from that location, thus aiding the pickup of speech, which oftentimes cannot be heard well enough for computer-based automatic speech recognition to occur.
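    A sketch of that phase-and-amplitude steering is classic delay-and-sum beamforming: delay each microphone channel by its extra acoustic path length from the sensed head position, then sum. The microphone geometry and sample rate are assumptions of the example:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, at room temperature

def delay_and_sum(signals, mic_positions, head_xyz, fs):
    """Steer spaced microphones toward the sensed head position.

    signals       : (n_mics, n_samples) array of synchronized audio.
    mic_positions : (n_mics, 3) microphone locations, meters.
    head_xyz      : (3,) head location from the camera system.
    fs            : sample rate in Hz.
    """
    dists = np.linalg.norm(mic_positions - head_xyz, axis=1)
    delays = (dists - dists.min()) / SPEED_OF_SOUND      # seconds
    shifts = np.round(delays * fs).astype(int)           # samples
    n = signals.shape[1] - shifts.max()
    # Advance later-arriving channels so the wavefronts align, then sum.
    aligned = [sig[s:s + n] for sig, s in zip(signals, shifts)]
    return np.mean(aligned, axis=0)
```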
  • Digital Interactive TV
  • [0079]
    As you watch TV, data can be taken from the camera system of the invention and transmitted back to the source of programming. This could include voting on a given proposition by raising your hand, for example, with your hand indication transmitted. Or you could hold up three fingers, and the count of fingers be transmitted. Or, in a more extreme case, your position, or the position of an object or portion thereof, could be transmitted; for example, you could buy a coded object whose code would be transmitted to indicate that you personally (having been pre-registered) had transmitted a certain packet of data.
  • [0080]
    If the programming source can transmit individually to you (not possible today, but forecast for the future), then much more is possible. The actual image and voice can respond using the invention to positions and orientations of persons or objects in the room—just as in the case of prerecorded data, or one to one internet connections. This allows group activity as well.
  • [0081]
    In the extreme case, full video is transmitted in both directions and total interaction of users and programming sources and each other becomes possible.
  • [0082]
    An interim possibility using the invention is to have a program broadcast to many, which shifts to a prerecorded DVD disc or the like driving a local image, say, when your hand input causes a signal to be activated.
  • Handwriting Authentication
  • [0083]
    A referenced co-pending application illustrated the use of the invention to track the position of a pencil in three-dimensional space, such that the point at which the user intends the writing point to be can be identified and therefore used to input information, such as the intended script.
  • [0084]
    As herein disclosed, this part of the invention can also be used for the purpose of determining whether or not a given person's handwriting or signature is correct.
  • [0085]
    For example, consider authentication of an internet commercial transaction. In this case, the user simply writes his name or address, and the invention is used to look at the movements of his writing instrument and determine from them whether or not the signature is authentic. (A movement of one or more of his body parts might also, or alternatively, be employed.) For example, a series of frames of the datum location on his pen can be taken to determine one or more positions on it as a function of time, even including calculation of its pointing direction from a determined knowledge, in three axes, of two points along the line of the pen axis. In this case a particular pointing vector sequence “signature” would be learned for this person, and compared to later signatures.
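    One sketch of comparing such a learned pointing-vector sequence with a later one is dynamic time warping over the unit vectors, which tolerates variation in writing speed; the approach and any acceptance threshold are assumptions, not taken from this disclosure:

```python
import numpy as np

def signature_distance(seq_a, seq_b):
    """Dynamic-time-warping distance between two recorded sequences
    of unit pointing vectors, arrays of shape (n, 3); smaller means
    more similar. An acceptance threshold would be chosen per user."""
    na, nb = len(seq_a), len(seq_b)
    D = np.full((na + 1, nb + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, na + 1):
        for j in range(1, nb + 1):
            # Cost is 0 for identical directions, 2 for opposite ones.
            cost = 1.0 - float(np.dot(seq_a[i - 1], seq_b[j - 1]))
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[na, nb] / (na + nb)
```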
  • [0086]
    What is anticipated here is that, in order to add what might be called a confirming degree of authenticity to the signature, it may not be necessary to track the signature completely. Rather, one might only determine that certain aspects of the movement of the pencil are the authentic ones. One could have people write using any kind of movement, not just the signature of their name. The fact is that people are mostly used to writing their name, and it would be assumed that that would be it. However, the computer could well ask the user to write something else, which they would then write, and that particular thing would be stored in the memory.
  • [0087]
    Optionally, one's voice could be recognized in conjunction with the motion signature to add further confirmation.
  • [0088]
    This type of ability for the computer system at the other end of the internet to query a writer to write a specific thing, chosen at random, adds a degree of cryptographic capacity to the invention. In other words, if the movements of my hand in writing different things can be stored, then clearly this has some value.
  • [0089]
    The important thing though is that some sort of representation of the movements of the pencil or other instrument can be detected using the invention and transmitted.
  • FIG. 6 Hand Held Computer
  • [0090]
    FIG. 6 illustrates an improved handheld computer embodiment of the invention. For example, FIG. 8 of the provisional application referenced above entitled “Camera Based Man-Machine Interfaces” illustrates a basic handheld device which is a phone, or a computer, or a combination thereof; alternatively to being handheld, it can be a wearable computer, for example on one's wrist.
  • [0091]
    In this embodiment, we further disclose the use of this device as a computer, with a major improvement being the incorporation of a camera of the device optionally in a position to look at the user, or an object held by the user, along the lines of FIG. 1 of the instant disclosure, for example.
  • [0092]
    Consider hand held computer 901 of FIG. 6, incorporating a camera 902 which can optionally be rotated about axis 905 so as to look at the user, or a portion thereof such as finger 906, or at objects at which it is pointed. Optionally, and often desirably, a stereo pair of cameras, further including camera 910, can also be used. It too may rotate, as desired. Alternatively, fixed cameras can be used, as in FIG. 1 and FIG. 8 of the referenced co-pending application, when physical rotation is not desired, for ruggedness, ease of use, or other reasons (noting that fixed cameras have fixed fields of view, which limit versatility in some cases).
  • [0093]
    When aimed at the user, as shown, it can be used, for example, to view and obtain images of:
  • [0094]
    One's self: facial expression, etc.; also for image reasons such as identification, or in combined effect.
  • [0095]
    One's fingers (any or all), one finger relative to another, and the like. This in turn allows conversing with the computer in a form of sign language, which can replace the keyboard of a conventional computer.
  • [0096]
    One or more objects in one's hand. This includes a pencil or pen, and thus the device can be used without a special touch screen and pencil if the pencil itself is tracked, as disclosed in the above figure. It also allows small children, and those who cannot hold an ordinary stylus, to use the device.
  • [0097]
    One's gestures.
  • [0098]
    The camera 902 (and 910 if used, and if desired) can also optionally be rotated and used to view points in space ahead of the device, as shown in dotted lines 902 a. In this position, for example, it can be used for the purposes described in the previous application. It can also be used to observe or point at (using optional laser pointer 930) points such as 935 on a wall, or a mounted LCD or projection display such as 940, on a wall or elsewhere, such as on the back of an airline seat.
  • [0099]
    With this feature of the invention, there is no requirement to carry a computer display with you. With an infrared connection (not shown), such as is known in the art, one can also transmit all normal control information to the display control computer 951. As displays become ubiquitous, this makes increasing sense; the trend of displays getting bigger while computers get smaller doesn't make sense if they need to be dragged around together. As one walks into a room, one uses the display or displays in that room (which might themselves be interconnected).
  • [0100]
    The camera unit 902 can sense the location of the display in space relative to the handheld computer, using for example the four points 955-958 on the corners of the display as references. This allows the handheld device to become an accurate pointer for objects displayed on the screen, including control icons. And it allows the objects on the screen to be sensed directly by the camera—if one does not have the capability to spatially synchronize and coordinate the display driver with the handheld computer.
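    A sketch of that corner-based sensing: with the four corner points 955-958 detected in the camera image, a planar homography maps camera pixels to display coordinates, so the point the unit aims at can be expressed on the screen. This assumes OpenCV, and the corner ordering is an assumption of the example:

```python
import cv2
import numpy as np

def display_homography(corners_px, display_w, display_h):
    """Build the camera-image -> display-coordinate mapping from the
    four detected corner points, ordered TL, TR, BR, BL."""
    src = np.asarray(corners_px, dtype=np.float32)
    dst = np.array([[0, 0], [display_w, 0],
                    [display_w, display_h], [0, display_h]],
                   dtype=np.float32)
    return cv2.getPerspectiveTransform(src, dst)

def to_display_coords(H, point_px):
    """Map one camera-image point (e.g., the aim point) to the display."""
    p = np.array([[point_px]], dtype=np.float32)   # shape (1, 1, 2)
    return cv2.perspectiveTransform(p, H)[0, 0]
```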
  • [0101]
    The camera can also be used to see gestures of others, as well as of the user, and to acquire raw video images of objects in its field of view.
  • [0102]
    A reverse situation also exists, in which the cameras are on the wall-mounted display; cameras 980 and 981, for example, can be used to look at the handheld computer module 901 and determine its position and orientation relative to the display.
  • [0103]
    Note that a camera such as 902, looking at the user, always has the reference frame of the handheld unit to which it is attached. If one works with a screen on a wall, one can aim the handheld unit, with its camera, at the screen and determine the screen's reference frame relative to the handheld unit. One can also have two cameras operating together, one looking at the wall display and the other at the user (as with 902 and 902 a); in this manner, one can dynamically compare the reference frames of the display and of the human input means in determining display parameters. This can be done in real time, and if so, one can actually wave the handheld unit around while still inputting accurate data to the display using one's fingers, objects, or whatever.
  • [0104]
    Use of a laser pointer such as 930 incorporated into the handheld unit has also been disclosed in the referenced co-pending applications. For example, a camera on the handheld computer unit such as 902, viewing in direction 902 a, would look at a laser spot such as 990 (which might or might not have come from the computer's own laser pointer 930) on the wall display, say, and recognize it by color and size/shape, referenced to the edge of the screen and to projected spots on the screen.
  • FIG. 7 Internet and Other Remote Applications
  • [0105]
    FIG. 7A illustrates new methods for internet commerce and other activities involving remote operation with 3D virtual objects displayed on a screen. This application also illustrates the ability of the invention to prevent computer vision eye strain.
  • [0106]
    Let us first consider the operation of the invention over the internet as it exists today, in highly bandwidth-limited form, dependent on ordinary phone lines for the most part. In this case it is highly desirable to transmit just the locations or pointing vectors of portions of human users, or of objects associated therewith (typically determined by the stereo photogrammetry of the invention), to a remote location, to allow the remote computer to modify the image or sound transmitted back to the user.
  • [0107]
    Another issue is the internet time delay, which can exist in varying degrees and is more noticeable the higher the resolution of the imagery transmitted. In this case, a preferred arrangement is to have real-time transmission of minimal position and vector data (using no more bandwidth than voice), and to transmit back to the user quasi-stationary images at good resolution. Transmission of the low-resolution, near-real-time images common in internet telephony today does not convey the natural feeling desired for many of the commercial applications now to be discussed. As bandwidth becomes more plentiful, these restrictions are eased.
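    To give a feel for how little bandwidth such position-and-vector updates need, here is a sketch of a compact wire format; the field layout is entirely a hypothetical illustration:

```python
import struct

# Hypothetical packet: uint32 timestamp (ms) + 3D position + unit
# pointing vector, six float32 values -> 28 bytes per update.
PACKET_FMT = "<I6f"

def pack_pose(t_ms, pos, vec):
    return struct.pack(PACKET_FMT, t_ms, *pos, *vec)

def unpack_pose(data):
    t_ms, x, y, z, vx, vy, vz = struct.unpack(PACKET_FMT, data)
    return t_ms, (x, y, z), (vx, vy, vz)
```

    At, say, 30 updates per second this is under 7 kbit/s, comfortably less than a voice channel.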
  • [0108]
    Let us consider the problem posed of getting information from the internet of today. A user 1000 can go to a virtual library displayed on screen 1001, controlled by computer 1002, where one sees a group 1010 of books on stacks. Using the invention as described herein and in the incorporated referenced applications to determine hand and finger locations, the user can point at a book such as 1014 in a computer-sensed manner, or even reach out and “grab” a book such as 1020 (dotted lines), apparently generated in 3D in front of him.
  • [0109]
    The pointing, or the reach and grab, is in real time, and the vector indicating the book in question (such as the pointing direction of one's finger at the book on the screen, or the position and orientation closing vectors of one's forefinger and thumb grabbing the 3D image 1020 of the book) is transmitted back by internet means to the remote computer 1030, which determines that the user has grabbed the book entitled War and Peace from the virtual shelf. A picture of the book coming off the shelf is then generated using fast 3D graphical imagery, such as the Merlin VR package available today from Digital Immersion company of Sudbury, Ontario. This picture (and the original picture of the books on the shelves) can be retransmitted over the internet at low resolution (but sufficient speed) to give a feeling of immediacy to the user. Alternatively, the imagery can be generated locally at higher resolution using the software package resident in the local computer 1002, which receives key commands from the distant computer 1030.
  • [0110]
    After the book has been “received” by the user, it can then be opened automatically to the cover page, for example, under control of the computer; or the user's hands can pretend to open it, and the sensed hands instruct the remote (or local, depending on version) computer to do so. A surrogate book such as 1040 can also be used to give the user the tactile feel of a book, even though the pages of the real book in question will be viewed on the display screen 1001. One difference could arise if the screen 1001 depicting the books were life-size, like real stacks; then one might wish to go over to a surrogate book incorporating a separate display screen, just as in a real library one would go to a reading table after removing a book from a stack.
  • [0111]
    Net grocery stores have already appeared, and similar applications concern picking groceries off the shelf of a virtual supermarket and filling one's shopping cart. The same applies, for that matter, to any store where it is desired to show the merchandise in the very manner people are accustomed to seeing it, namely on shelves or racks, generally as one walks down an aisle or fumbles through a rack of clothes, for example. In each case, the invention, which can also optionally use voice input, as if to talk to a clothing salesperson, can be used to monitor the person's positions and gestures.
  • [0112]
    The invention in this mode can also be used to allow one to peruse much larger objects. For example, to buy a car (or to walk through a house, say) over the internet, one can lift the hood, look inside, etc., all by using the invention to monitor the 3D position of one's head or hands and move the presented image of the car accordingly. If the image is presented substantially life-size, then one can be monitored as one physically walks around the car in one's room, say, with the image changing accordingly; in other words, just as today.
  • [0113]
    Note that while the image can be apparently life-size using virtual reality glasses, the natural movements one is accustomed to in buying a car are not present. This invention makes such a natural situation possible (though it can also be used with such glasses as well).
  • [0114]
    It is noted that the invention also comprehends adding force-based feedback to your hands, such that it feels like you lifted the hood or grabbed the book, say. For this purpose, holding a surrogate object as described in co-pending applications could be useful, in this case providing force feedback through the object.
  • [0115]
    If one looks at internet commerce today, some big applications have turned out to be clothes and books. Clothes are by far the largest expenditure item, so let us look closer at this.
  • [0116]
    Consider too a virtual manikin, which can also have the measurements of a remote shopper. For example, consider FIG. 7B, where a woman's measurements are input by known means, such as a keyboard 1050, over the internet to a CAD program in computer 1055, which creates on display screen 1056 a 3D representation of a manikin 1059 having the woman's shape in the home computer 1060. As she selects a dress 1065 to try on, the dress, which let us say comes in 10 sizes, 5 to 15, is virtually “tried on” the virtual manikin, and the woman 1070 looks at the screen 1056 and determines the fit of a standard size 12 dress. She can rapidly select larger or smaller sizes and decide which she thinks looks and/or fits better.
  • [0117]
    Optionally, she can signal the computer to rotate the image in any direction, and can look at it from different angles, up or down as well, simply by doing a rotation in the computer. This signaling can be conventional, using for example a mouse, or can use the TV-based sensing aspects of the invention, such as employing camera 1070, also as shown in FIG. 1, for example. In another such case, she can reach out with her finger 1075, for example, and push or pull the material in a virtual manner, using the camera to sense the direction of her finger. Or she can touch herself at the points where the material should be taken up or let out, with the camera system sensing the locations of touch (typically requiring at least a stereo pair of cameras, or another electro-optical system capable of determining where her fingertip is in 3D space). Note that a surrogate for the tried-on dress in this case could be the dress she has on, which is touched at the location desired on the displayed dress.
  • [0118]
    The standard-size dress can then be altered and shipped to her, or the requisite modifications can be made in the CAD program and a special dress cut out and sewn, which would fit better.
  • [0119]
    A person can also use her hands, via the TV cameras of the invention determining hand location relative to the display, to take clothes off a virtual manikin, which could be a representation of any person, real or imaginary. Alternatively, she can remotely reach out, using the invention, to a virtual rack of clothes such as 1090, take an object off the rack, and put it on the manikin. This is particularly natural in near-life-size representation, just like being in a store or other venue. This ability of the invention to bring real-life experience to computer shopping and other activity is a major advantage.
  • [0120]
    The user can also feel the texture of the cloth if suitable haptic devices are available to the user, which can be activated remotely by the virtual clothing program, or other type of program.
  • [0121]
    Modifications of the invention herein disclosed will occur to persons skilled in the art, and all such modifications are deemed to be within the scope of the invention as defined by the appended claims.
U.S. Classification348/77, 348/E07.085
International ClassificationG06F3/038, H04N7/18, G06F3/00, G06F3/042, G06F3/01
Cooperative ClassificationG06F3/017, G06F3/0304, G06F3/0325, G06F3/0386
European ClassificationG06F3/038L, G06F3/03H6, G06F3/01G