US20130002539A1 - System and method for interacting with a display - Google Patents

System and method for interacting with a display

Info

Publication number
US20130002539A1
US20130002539A1 (application US 13/614,200)
Authority
US
United States
Prior art keywords
gesture
user
action
command
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US13/614,200
Inventor
Mark D. DENNARD
Douglas J. McCULLOCH
Mark J. SCHUNZEL
Matthew B. TREVATHAN
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Business Machines Corp filed Critical International Business Machines Corp
Priority to US 13/614,200
Publication of US20130002539A1
Legal status: Abandoned (current)

Classifications

    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B21/00Projectors or projection-type viewers; Accessories therefor
    • G03B21/14Details
    • G03B21/26Projecting separately subsidiary matter simultaneously with main image

Definitions

  • the invention generally relates to a system and method for interacting with a projected display and, more particularly, to a system and method for interacting with a projected display utilizing gestures capable of executing menu driven commands and other complex command structures.
  • Businesses strive for efficiencies throughout their organization. These efficiencies result in increased productivity of their employees which, in turn, results in increased profitability for the business and, if publicly traded, its shareholders. To achieve such efficiencies, by way of examples, it is not uncommon to hold meetings or make presentations to audiences to discuss new strategies, advances in the industry and new technologies, etc.
  • whiteboards are one way to present material relevant to the presentation or meeting.
  • a whiteboard allows a presenter to write using special “dry erase” markers.
  • an attendee may save the material by manually copying the text in a notebook before the image is erased by the presenter.
  • a problem with this approach is that it is both time consuming and error prone.
  • the use of whiteboards is limited because it is difficult to draw charts or other graphical images and it is not possible to manipulate data.
  • the presenter can present charts or other graphical images to an audience by optically projecting these images onto a projection screen or a wall.
  • an LCD (liquid crystal display) projector is commonly used as the image source, where the charts, text, or other graphical images are electronically generated by a display computer, such as a personal computer (PC) or a laptop computer.
  • the PC provides video outputs, but interaction with the output is limited, at best.
  • a conventional system requires the presenter to return to the display computer so as to provide control for the presentation.
  • the presenter controls the displayed image by means of keystrokes or by “mouse commands” with a cursor in the appropriate area of the computer monitor display screen.
  • an operator may use a remote control device to wirelessly transmit control signals to a projector sensor.
  • the presenter acquires some mobility by means of the remote control device, the presenter still cannot interact with the data on the screen itself; that is, the operator is limited to either advancing or reversing the screen.
  • a method comprises recognizing a disturbance in a display zone of a projected image and displaying a selected state in response to the recognized disturbance.
  • the method further includes recognizing a gesture which interrupts a light source and is associated with an action to be taken on or associated with the displayed selected state. An action is executed in response to the recognized gesture.
  • the method comprises projecting an image on a surface using at least a source of light and a processor configured to store and execute application programs associated with the image.
  • the method senses a first action in a display zone of the image and validates the first action.
  • the method displays a selected state in response to the validated first action.
  • the method further senses a gesture interrupting the light source and validates that the gesture is associated with a pre-defined command and the displayed selected state.
  • the method executes the pre-defined command in response to the validated gesture.
  • a system comprises a server having a database containing data associated with at least one or more predefined gestures, and at least one of a hardware and software component for executing an action based on the at least one or more predefined gestures.
  • the hardware and software compares a first action in an interaction zone to a predefined template of a shape, and a second action, which interrupts a light source, to the at least one or more predefined gestures.
  • the system validates the first action and the second action based on the comparison to the predefined template and the at least one or more predefined gestures.
  • the system executes the action based on the validating of the first action and the second action.
  • a computer program product comprising a computer usable medium having readable program code embodied in the medium includes at least one component to perform the steps of the invention, as disclosed and recited herein.
  • a method comprises recognizing a first action of a first object and a second action of a second object. The method further includes validating a movement comprising a combination of the first action and the second action by comparison to predefined gestures and executing a complex command based on the validating of the combination of the first action and the second action.
  • a method for deploying an application for web searching which comprises providing a computer infrastructure.
  • the computer infrastructure is operable to: project an image on a surface; sense a first action in a predefined interaction zone of the image; validate the first action and displaying a selected state; sense a gesture; validate that the gesture is associated with a pre-defined action; and execute the pre-defined action in response to the validated gesture.
  • FIG. 1 shows an illustrative environment for implementing the steps in accordance with the invention
  • FIG. 2 shows an embodiment of a system in accordance with the invention
  • FIG. 3 is a representation of a range of motion of the system in a representative environment in accordance with an embodiment of the invention
  • FIG. 4 represents a method to correct for distortion of a projected image on a surface or object
  • FIG. 5 shows a system architecture according to an embodiment of the invention
  • FIGS. 6 a and 6 b show a representative look-up table according to an embodiment of the invention
  • FIG. 7 shows an illustrative template used in accordance with an embodiment of the invention.
  • FIG. 8 is a representation of a swim lane diagram implementing steps according to an embodiment of the invention.
  • the invention is directed to a system and method for interacting with a projected display and more specifically to a system and method for interacting with a projected display utilizing gestures capable of executing menu driven commands and other complex command structures.
  • the system and method can be implemented using a single computer, over any distributed network or stand-alone server, for example.
  • the system and method is configured to be used as an interactive touch screen projected onto any surface, and which allows the user to perform and/or execute any command on the interactive touch screen surface without the need for a peripheral device such as, for example, a mouse or keyboard.
  • the system and method is configured to provide device-free, non-tethered interaction with a display projected on any number of different surfaces, objects and/or areas in an environment.
  • the system and method of the invention projects displays on different surfaces such as, for example, walls, desks, presentation boards and the like.
  • the system and method allows complex commands to be executed such as, for example, opening a new file using a drag down menu, or operations such as cutting, copying, pasting or other commands that require more than a single command step. It should be understood, though, that the system and method may also implement and execute single step commands.
  • the commands are executed using gestures, which are captured, reconciled and executed by a computer.
  • the actions to be executed require two distinct actions by the user as implemented by a user's hands, pointers of some kind or any combination thereof.
  • the system and method of the invention does not require any special devices to execute the requested commands and, accordingly, is capable of sensing and supporting forms of interaction such as hand gestures and/or motion of objects, etc. to perform such complex operations.
  • the system and method can be implemented using, for example, the Everywhere DisplayTM, manufactured and sold by International Business Machines Corp. (Everywhere DisplayTM and IBM are trademarks of IBM Corp. in the United States, other countries, or both.)
  • the Everywhere Display can provide computer access in public spaces, facilitate navigation in buildings, localize resources in a physical space, bring computational resources to different areas of an environment, and facilitate the reconfiguration of the workplace.
  • FIG. 1 shows an illustrative environment 10 for managing the processes in accordance with the invention.
  • the environment 10 includes a computer infrastructure 12 that can perform the processes described herein.
  • the computer infrastructure 12 includes a computing device 14 that comprises a management system 30 , which makes computing device 14 operable to perform complex commands using gestures in accordance with the invention, e.g., process described herein.
  • the computing device 14 includes a processor 20 , a memory 22 A, an input/output (I/O) interface 24 , and a bus 26 . Further, the computing device 14 is in communication with an external I/O device/resource 28 and a storage system 22 B.
  • the processor 20 executes computer program code, which is stored in memory 22 A and/or storage system 22 B. While executing computer program code, the processor 20 can read and/or write data from look-up tables which are the basis for the execution of the commands to be performed on the computer, to/from memory 22 A, storage system 22 B, and/or I/O interface 24 .
  • the bus 26 provides a communications link between each of the components in the computing device 14 .
  • the I/O device 28 can comprise any device that enables an individual to interact with the computing device 14 or any device that enables the computing device 14 to communicate with one or more other computing devices using any type of communications link.
  • the computing device 14 can comprise any general purpose computing article of manufacture capable of executing computer program code installed thereon (e.g., a personal computer, server, handheld device, etc.). However, it is understood that the computing device 14 is only representative of various possible equivalent computing devices that may perform the processes described herein. To this extent, in embodiments, the functionality provided by computing device 14 can be implemented by a computing article of manufacture that includes any combination of general and/or specific purpose hardware and/or computer program code. In each embodiment, the program code and hardware can be created using standard programming and engineering techniques, respectively.
  • the computer infrastructure 12 is only illustrative of various types of computer infrastructures for implementing the invention.
  • the computer infrastructure 12 comprises two or more computing devices (e.g., a server cluster) that communicate over any type of communications link, such as a network, a shared memory, or the like, to perform the process described herein.
  • one or more computing devices in the computer infrastructure 12 can communicate with one or more other computing devices external to computer infrastructure 12 using any type of communications link.
  • the communications link can comprise any combination of wired and/or wireless links; any combination of one or more types of networks (e.g., the Internet, a wide area network, a local area network, a virtual private network, etc.); and/or utilize any combination of transmission techniques and protocols.
  • the management system 30 enables the computer infrastructure 12 to recognize gestures and execute associated commands.
  • the invention provides a business method that performs the process steps of the invention on a subscription, advertising, and/or fee basis. That is, a service provider, such as a Solution Integrator, could offer to perform the processes described herein. In this case, the service provider can create, maintain, and support, etc., a computer infrastructure that performs the process steps of the invention for one or more customers. In return, the service provider can receive payment from the customer(s) under a subscription and/or fee agreement and/or the service provider can receive payment from the sale of advertising content to one or more third parties.
  • FIG. 2 shows an embodiment of the system of the invention.
  • the system is generally depicted as reference numeral 100 and comprises a projector 110 (e.g., LCD projector) and a computer-controlled pan/tilt mirror 120 .
  • the projector 110 is connected to the display output of a computer 130 , which also controls the mirror 120 .
  • the light of the projector 110 can be directed in any direction within the range of approximately 60 degrees in the vertical axis and 230 degrees in the horizontal axis.
  • the system 100 is capable of projecting a graphical display on most parts of all walls and almost all of the floor or other areas of a room.
  • the projector 110 is a 1200 lumens LCD projector.
  • a camera 140 is also connected to the computer 130 and is configured to capture gestures or motions of the user and provide such gestures or motions to the computer 130 for reconciliation and execution of commands (as discussed in greater detail below).
  • the camera 140 is preferably a CCD based camera which is configured and located to capture motions and the like of the user.
  • the camera 140 and other devices may be connected to the computer via any known networking system as discussed above.
  • FIG. 3 is a representation of a range of motion of the system in a representative environment according to an embodiment of the invention.
  • the system 100 of the invention is configured to project a graphical display on walls, the floor, and a table, for example.
  • the system 100 is capable of projecting images on most any surface within an environment thus transforming most any surface into an interactive display.
  • FIG. 4 represents a graphical methodology to correct for distortion of the projected image caused by oblique projection and by the shape of the projected surface.
  • the image to be projected is inversely distorted prior to projection on the desired surface using, for example, standard computer graphics hardware to speed up the process of distortion control.
  • one methodology relies on the camera 140 and projector 110 having the same focal length. Therefore, to project an image obliquely without distortions it is sufficient to simulate the inverse process (i.e., viewing with a camera) in a virtual 3D-computer graphics world.
  • the system and method of the invention texture-maps the image to be displayed onto a virtual computer graphics 3D surface “VS” identical (minus a scale factor) to the actual surface “AS”.
  • the view from the 3D virtual camera 140 should correspond exactly or substantially exactly to the view of the projector (if the projector was the camera) when:
  • a standard computer graphics board may be used to render the camera's view of the virtual surface and send the computed view to the projector 110 . If the position and attitude of the virtual surface “VS” are correct, the projection of this view compensates the distortion caused by oblique projection or by the shape of the surface.
  • an appropriate virtual 3D surface can be uniquely used and calibrated for each surface where images are projected.
  • the calibration parameters of the virtual 3D surface may be determined manually by projecting a special pattern and interactively adjusting the scale, rotation and position of the virtual surface in the 3D world, and the “lens angle” of the 3D virtual camera.
  • FIG. 5 shows a current system architecture according to an embodiment of the invention.
  • the system architecture includes a three-tier architecture comprising a services layer 300 , an integration layer 310 and an application layer 320 .
  • each of the modules 300 a - 300 f in the services layer 300 exposes a set of capabilities through a http/XML application programming interface (API).
  • modules in the services layer 300 have no “direct” knowledge or dependence on other modules in the layer; however, the modules 300 a - 300 f may share a common XML language along with a dialect for communication with each module in the services layer 300 .
  • the services layer 300 includes six modules 300 a - 300 f.
  • a vision interface module (vi) 300 a may be responsible for recognizing gestures and converting this information to the application (e.g., program being manipulated by the gestures).
  • a projection module (pj) 300 b may handle the display of visual information (via the projector) on a specified surface while a camera module (sc) 300 c provides the video input (via the camera) from the surface of interest to the vision interface (vi) 300 a.
  • the camera, as discussed above, will send the gestures and other motions of the user.
  • Interaction with the interface by the user comprises orchestrating the vision interface 300 a, projection module 300 b and camera module 300 c through a sequence of synchronous and asynchronous commands, which are capable of being implemented by those of skill in the art.
  • Other modules present in the services layer 300 include a 3D environment modeling module 300 d, a user localization module 300 e, and a geometric reasoning module 300 f.
  • the 3D environment modeling module 300 d can be a version of standard 3D modeling software.
  • the 3D environment modeling module 300 d can support basic geometric objects built out of planar surfaces and cubes and allows importing of more complex models.
  • the 3D environment modeling module 300 d stores the model in XML format, with objects as tags and annotations as attributes.
  • the 3D environment modeling module 300 d is also designed to be accessible to the geometric reasoning module 300 f, as discussed below.
  • the geometric reasoning module 300 f is a geometric reasoning engine that operates on a model created by a modeling toolkit which, in embodiments, is a version of standard 3D modeling software.
  • the geometric reasoning module 300 f enables automatic selection of the appropriate display and interaction zones (hotspots) based on criteria such as proximity of the zone to the user and non-occlusion of the zone by the user or by other objects. In this manner, gestures can be used to manipulate and execute program commands and/or actions.
  • Applications or other modules can query the geometric reasoning module 300 F through a defined XML interface.
  • the geometric reasoning module 300 f receives a user position and a set of criteria, specified as desired ranges of display zone properties, and returns all display zones which satisfy the specified criteria.
  • the geometric reasoning module 300 f may also have a look-up table or access thereto for determining gestures of a user, which may be used to implement the actions or commands associated with a certain application.
  • the properties for a display zone may include, amongst other properties, the following:
  • the user localization module 300 e is, in embodiments, a real-time camera-based tracking component that determines the position of the user in the environment and, in embodiments, gestures of the user.
  • the user localization module 300 e can be configured to track the user's motion to, for example, move the display to the user or, in further embodiments, recognize gestures of the user for implementing actions or commands.
  • the tracking technique is based on motion, shape, and/or flesh-tone cues.
  • a differencing operation on consecutive frames of the incoming video can be performed.
  • a morphological closing operation then removes noise and fills up small gaps in the detected motion regions.
  • a standard contour-tracing algorithm then yields the bounding contours of the segmented regions.
  • the contours are smoothed and the orientation and curvature along each contour are computed.
  • the shape of each contour is analyzed to check if it could be a head or other body part or object of interest, which is tracked by the system and method of the invention.
  • the system looks for curvature changes corresponding to a head-neck silhouette (e.g., concavities at the neck points and convexity at the top of the head).
  • sufficient flesh-tone color within the detected head region is detected by matching the color of each pixel within the head contour with a model of flesh tone colors in normalized r-g space.
  • This technique detects multiple heads in real time. In embodiments, multiple cameras with overlapping views may be used to triangulate and estimate the 3D position of the user. This same technique can be used to recognize gestures in order for the user to interact with the display, e.g., provide complex commands.
  • the integration layer 310 provides a set of classes that enable a JAVA application to interact with the services. (Java and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both.)
  • the integration layer 310 in embodiments, contains a set of JAVA wrapper objects for all objects and commands, along with classes enabling synchronous and asynchronous communication with modules in the services layer 300 .
  • the integration layer 310 mediates the interaction among the services layer modules 300 a - 300 f.
  • a JAVA application can start an interaction that sends commands to the vision interface, the projection module and the mirror, defining, instantiating, activating, and managing a complex interactive display interaction.
  • the integration layer 310 can coordinate the geometric reasoning module and the 3D environment modeler in a manner that returns the current user position along with all occluded surfaces to the application at a specified interval.
  • the application layer 320 comprises a set of classes and tools for defining and running JAVA applications and a repository of reusable interactions.
  • each interaction is a reusable class that is available to any application.
  • An application class, for example, is a container for composing multiple interactions, maintaining application state during execution, and controlling the sequence of interactions through the help of a sequence manager 320 a.
  • Other tools may also be implemented such as, for example, a calibrator tool that allows a developer to calibrate the vision interface module 300 a, the projection module 300 b and the camera module 300 c for a particular application.
  • the user interacts with the projected display by using hand gestures over the projected surface, as if the hands, for example, were a computer mouse.
  • Techniques described above, such as, for example, using the geometric reasoning module 300 f or the user localization module 300 e can be implemented to recognize such gesturing.
  • the geometric reasoning module 300 f may use an occlusion mask, which indicates the parts of a display zone occluded by objects such as, for example, hand gestures of the user.
  • the camera may perform three basic steps: (i) detecting when the user is pointing; (ii) tracking where the user is pointing; and (iii) detecting salient events such as a button touch from the pointing trajectory and gestures of the user. This may be performed, for example, by detecting an occlusion of the projected image over a certain zone, such as, for example, an icon or pull down menu. This information is then provided to the computer, which then reconciles such gesture with a look-up table, for example.
  • FIGS. 6 a and 6 b show a representative look-up table according to an embodiment of the invention. Specifically, it is shown that many complex commands can be executed using gestures such as, for example, a single left click of the mouse by the user moving his or her hand in a clockwise rotation. Other gestures are also contemplated by the invention such as those shown in the look-up tables of FIGS. 6 a and 6 b . It should be understood, though, that the gestures shown in FIGS. 6 a and 6 b should be considered merely illustrative examples.
  • a complex command can be executed based on a combination of movements by two (or more) objects, such as, for example, both of the user's hands.
  • the system and method of the invention would attempt to reconcile and/or verify a motion (gesture) of each object, e.g., both hands, using the look-up table of FIGS. 6 a and 6 b, for example. If both of the motions cannot be independently verified in the look-up table, for example, the system and method would attempt to reconcile and/or verify both of the motions using a look-up table populated with actions associated with combination motions.
  • an “S” motion of both hands, where neither motion is recognized independently, may be a gesture for taking an action such as requesting insertion of a “watermark” in a word processing application. It should be recognized by those of skill in the art that all actions, whether for a single motion or a combination of motions, etc., may be populated in a single look-up table or multiple look-up tables, without any limitations.
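  • As a concrete illustration of the reconciliation just described, the look-up tables can be modeled as maps from gesture identifiers to commands, with single motions tried first and two-hand combinations tried when neither motion resolves on its own. The sketch below is hypothetical: the class name, the gesture labels and the mapped commands are invented placeholders, not the actual contents of FIGS. 6 a and 6 b.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

/**
 * Hypothetical sketch of the gesture-to-command look-up described above.
 * Gesture names and commands are illustrative placeholders only.
 */
public class GestureCommandTable {

    // Single-motion gestures, e.g., one hand moving in a recognized pattern.
    private final Map<String, String> singleGestures = new HashMap<>();

    // Combination gestures keyed by both motions, e.g., both hands tracing an "S".
    private final Map<String, String> combinationGestures = new HashMap<>();

    public GestureCommandTable() {
        singleGestures.put("CLOCKWISE_ROTATION", "SINGLE_LEFT_CLICK");
        singleGestures.put("COUNTER_CLOCKWISE_ROTATION", "RIGHT_CLICK");
        combinationGestures.put("S_MOTION+S_MOTION", "INSERT_WATERMARK");
    }

    /**
     * Reconcile observed motions: each motion is checked independently first;
     * if neither resolves, the pair is checked against the combination table.
     * An empty result corresponds to "no action taken".
     */
    public Optional<String> reconcile(String firstMotion, String secondMotion) {
        if (singleGestures.containsKey(firstMotion)) {
            return Optional.of(singleGestures.get(firstMotion));
        }
        if (secondMotion != null) {
            if (singleGestures.containsKey(secondMotion)) {
                return Optional.of(singleGestures.get(secondMotion));
            }
            return Optional.ofNullable(
                    combinationGestures.get(firstMotion + "+" + secondMotion));
        }
        return Optional.empty();
    }
}
```

  • Whether single and combination gestures live in one table or several is, as noted above, left open; keyed maps per gesture arity are simply the most direct rendering of the look-up step.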
  • FIG. 7 shows a template which may be implemented in embodiments of the invention. Even though the appearance of an object will change as it moves across the projected image, it will create a region of changed pixels that retains the basic shape of the moving object. To find pointing fingertips, for example, each video frame is subtracted from the frame before it, noise is removed with simple computational morphology, and a fingertip template “T” is then convolved over the difference image using a matching function. If the template “T” does not match well in the image, it can be assumed that the user is not pointing or gesturing.
  • the fingertip template of FIG. 7 is kept short, in embodiments, so that it will match fingertips that extend only slightly beyond their neighbors and will match fingertips within a wider range of angles. As a result, the template often matches well at several points in the image. It should be readily understood that other templates may also be used with the invention such as, for example, pointers and other objects. These templates may also be used for implementing and recognizing the gestures, referring back to FIGS. 6 a and 6 b.
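  • A rough sketch of the frame-differencing and template-matching steps follows. It operates on plain grayscale arrays; the motion threshold and the scoring rule are assumptions, and the morphological noise removal mentioned above is omitted for brevity.

```java
/**
 * Illustrative sketch of fingertip detection: difference consecutive frames,
 * then slide a small fingertip template "T" over the difference image and keep
 * the best match score. The threshold and scoring rule are assumptions.
 */
public class FingertipMatcher {

    /** Per-pixel absolute difference of two grayscale frames of equal size. */
    static int[][] difference(int[][] current, int[][] previous) {
        int h = current.length, w = current[0].length;
        int[][] diff = new int[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                diff[y][x] = Math.abs(current[y][x] - previous[y][x]);
            }
        }
        return diff;
    }

    /**
     * Slide the template over the difference image and return the best
     * agreement score in [0, 1]. A low best score suggests the user is not
     * pointing or gesturing, as described above.
     */
    static double bestMatch(int[][] diff, int[][] template) {
        int th = template.length, tw = template[0].length;
        double best = 0.0;
        for (int y = 0; y + th <= diff.length; y++) {
            for (int x = 0; x + tw <= diff[0].length; x++) {
                int agree = 0;
                for (int ty = 0; ty < th; ty++) {
                    for (int tx = 0; tx < tw; tx++) {
                        boolean moving = diff[y + ty][x + tx] > 30; // assumed threshold
                        boolean expected = template[ty][tx] > 0;
                        if (moving == expected) agree++;
                    }
                }
                best = Math.max(best, agree / (double) (th * tw));
            }
        }
        return best;
    }
}
```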
  • FIG. 8 is a swim lane diagram showing steps of an embodiment of the invention. “Swim lane” diagrams may be used to show the relationship between the various “components” in the processes and to define the steps involved in the processes. FIG. 8 may equally represent a high-level block diagram of components of the invention implementing the steps thereof.
  • the steps of FIG. 8 may be implemented on computer program code in combination with the appropriate hardware.
  • This computer program code may be stored on storage media such as a diskette, hard disk, CD-ROM, DVD-ROM or tape, as well as a memory storage device or collection of memory storage devices such as read-only memory (ROM) or random access memory (RAM). Additionally, the computer program code can be transferred to a workstation over the Internet or some other type of network.
  • the steps of FIG. 8 may also be implemented by the embodiment of FIG. 1 .
  • FIG. 8 shows a process flow diagram, describing a scenario in which a user performs a series of actions using the gesture based user interaction grammar provided herein.
  • a user approaches a wall or other surface on which the system of the invention has projected a User Interface (UI) for a given application.
  • the user is recognized by the camera as they disturb the camera's field of view.
  • the user opts to open a menu from an icon in the UI via a right-click action.
  • the user selects the icon with the dominant hand (e.g., left hand).
  • button touches are detected by examining the hand trajectory for several specific patterns that indicate this type of motion.
  • the camera recognizes the disturbance of the “hotspot” (zone) associated to the selected icon, and calls the system to validate that the shape of the disturbance is identified in the template.
  • a determination is made to establish if the shape of the disturbing object is a valid shape in the template. If not, then at step 825 , no action is taken; however, as described above, in embodiments, the system may recognize a second disturbance or gesture, at which time the system will make a determination that the combination of the first and second motions (e.g., disturbances) is a unique, valid gesture for an action to be taken.
  • the system displays the selected state of the selected icon at step 830 .
  • the system may recognize two gestures simultaneously, at which time the system will make a determination as to whether the combination of gestures is associated with an action. If so, an appropriate action will be taken. This same or similar processing may continue with other examples.
  • the user uses the non-dominant hand (e.g., right hand) to articulate the gesture associated to a “right-click” action, for example (counter-clockwise rotation, see look-up table of FIGS. 6 a and 6 b ).
  • the camera recognizes the articulation of the gesture, and at step 845 , the system performs a lookup to validate that the gesture resides in the system and is associated to an action, in which case the system executes the action, e.g., displays the open menu.
  • the user selects from one of “X” number of possible navigational menu options.
  • the camera recognizes the disturbance of the hotspot (interaction zone) associated to the selected menu item, and calls to validate that the shape of the disturbance is identified in the template.
  • a determination is made as to whether the shape of the disturbing object is a valid shape in the template. If not recognized, then the system reverts back to step 825 and takes no action. If the gesture is valid (recognized), then at step 875 , the system displays the selected state of the selected menu item.
  • the user uses the non-dominant hand, for example, to articulate the gesture associated to a “single left-click” action (single clockwise rotation, see look-up table of FIGS. 6 a and 6 b ).
  • the camera recognizes the articulation of the gesture, and the system performs lookup to validate that the gesture resides in the system and is associated to an action.
  • the system makes a determination if the gesture is associated with an action. If not, the system again reverts back to step 825 . If there is an associated action, at step 900 , the system executes the associated action (navigate user to associated screen in the UI, in this case). The process then ends at “E”.
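  • Read as code, the FIG. 8 flow is a two-stage gate: a hotspot disturbance must match the shape template before a selected state is displayed, and a subsequent gesture must be found in the look-up table before its action is executed. The sketch below paraphrases that flow; the interface and method names are invented for illustration.

```java
import java.util.Optional;

/**
 * Hedged paraphrase of the FIG. 8 interaction flow. Class, interface and
 * method names are illustrative; only the step numbers cited in the text
 * (825, 830, 845, 900) are taken from the description above.
 */
public class InteractionFlow {

    interface ShapeTemplate { boolean matches(String disturbanceShape); }
    interface GestureTable  { Optional<Runnable> actionFor(String gesture); }

    private final ShapeTemplate template;
    private final GestureTable gestures;
    private String selectedItem; // null until a valid selection exists

    public InteractionFlow(ShapeTemplate template, GestureTable gestures) {
        this.template = template;
        this.gestures = gestures;
    }

    /** Hotspot disturbance: an invalid shape means no action (step 825); a
     *  valid shape displays the selected state of the item (step 830). */
    public boolean onDisturbance(String item, String disturbanceShape) {
        if (!template.matches(disturbanceShape)) {
            return false;
        }
        selectedItem = item;
        return true;
    }

    /** Gesture articulation: look the gesture up (step 845) and, if it maps to
     *  an action on the current selection, execute it (step 900). */
    public boolean onGesture(String gesture) {
        if (selectedItem == null) {
            return false;
        }
        Optional<Runnable> action = gestures.actionFor(gesture);
        if (action.isEmpty()) {
            return false; // revert: no action taken
        }
        action.get().run(); // e.g., open a menu or navigate to a screen
        return true;
    }
}
```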
  • a user points to a particular zone within the display area, e.g., a certain application.
  • the system of the invention would recognize such action by the methods noted above.
  • the system would “lock” that selection. Once locked, the user can then provide a gesture such as, for example, an “e” shape to exit the application, which will then be verified and executed by the system of the invention.

Abstract

A system and method of interacting with a display. The method comprises recognizing a disturbance in a display zone of a projected image and displaying a selected state in response to the recognized disturbance. The method further includes recognizing a gesture which interrupts a light source and is associated with an action to be taken on or associated with the displayed selected state. An action is executed in response to the recognized gesture. The system includes a server having a database containing data associated with at least one or more predefined gestures, and at least one of a hardware and software component for executing an action based on the at least one or more predefined gestures.

Description

    CROSS REFERENCE TO RELATED APPLICATION
  • The present application is a continuation of U.S. application Ser. No. 11/552,811, filed Oct. 25, 2006, the contents of which are incorporated by reference herein in their entirety.
  • FIELD OF THE INVENTION
  • The invention generally relates to a system and method for interacting with a projected display and, more particularly, to a system and method for interacting with a projected display utilizing gestures capable of executing menu driven commands and other complex command structures.
  • BACKGROUND OF THE INVENTION
  • Businesses strive for efficiencies throughout their organization. These efficiencies result in increased productivity of their employees which, in turn, results in increased profitability for the business and, if publicly traded, its shareholders. To achieve such efficiencies, by way of examples, it is not uncommon to hold meetings or make presentations to audiences to discuss new strategies, advances in the industry and new technologies, etc.
  • In such meetings, presentation boards or so-called “whiteboards” are one way to present material relevant to the presentation or meeting. As is well known, a whiteboard allows a presenter to write using special “dry erase” markers. When the text is no longer needed such material may be erased so that the user can continue with the presentation, for example. But unfortunately, often the text needs to be saved in order to refer back to the material or place new material in the proper context. In these situations, an attendee may save the material by manually copying the text in a notebook before the image is erased by the presenter. A problem with this approach is that it is both time consuming and error prone. Also, the use of whiteboards is limited because it is difficult to draw charts or other graphical images and it is not possible to manipulate data.
  • In another approach, it is not uncommon to use large scrolls or tear off pieces of paper to make the presentation. By using this approach, the presenter merely removes the paper from the pad (or rolls the paper) and then continues with the next sheet. This approach, though, can be cumbersome and although it allows the presenter to refer back to past writings, it is not very efficient. Additionally, this can result in many different sheets or very large scrolls of one sheet which can become confusing to the audience and, even, the presenter. Also, as with the above approach, it is difficult to draw charts or other graphical images, and it is not possible to manipulate data.
  • In a more technology efficient approach, the presenter can present charts or other graphical images to an audience by optically projecting these images onto a projection screen or a wall. In known applications, an LCD (liquid crystal display) projector is commonly used as the image source, where the charts, text, or other graphical images are electronically generated by a display computer, such as a personal computer (PC) or a laptop computer. In such display systems, the PC provides video outputs, but interaction with the output is limited, at best.
  • Also, whether the presenter is standing at a lectern, or is moving about before the audience, there is little direct control over the image being displayed upon the projection screen when using a conventional LCD/PC projection display system. For example, a conventional system requires the presenter to return to the display computer so as to provide control for the presentation. At the display computer, the presenter controls the displayed image by means of keystrokes or by “mouse commands” with a cursor in the appropriate area of the computer monitor display screen.
  • In some applications, an operator may use a remote control device to wirelessly transmit control signals to a projector sensor. Although the presenter acquires some mobility by means of the remote control device, the presenter still cannot interact with the data on the screen itself; that is, the operator is limited to either advancing or reversing the screen.
  • Accordingly, there exists a need in the art to overcome the deficiencies and limitations described hereinabove.
  • SUMMARY OF THE INVENTION
  • In a first aspect of the invention, a method comprises recognizing a disturbance in a display zone of a projected image and displaying a selected state in response to the recognized disturbance. The method further includes recognizing a gesture which interrupts a light source and is associated with an action to be taken on or associated with the displayed selected state. An action is executed in response to the recognized gesture.
  • In another aspect of the invention, the method comprises projecting an image on a surface using at least a source of light and a processor configured to store and execute application programs associated with the image. The method senses a first action in a display zone of the image and validates the first action. The method displays a selected state in response to the validated first action. The method further senses a gesture interrupting the light source and validates that the gesture is associated with a pre-defined command and the displayed selected state. The method executes the pre-defined command in response to the validated gesture.
  • In another aspect of the invention, a system comprises a server having a database containing data associated with at least one or more predefined gestures, and at least one of a hardware and software component for executing an action based on the at least one or more predefined gestures. The hardware and software compares a first action in an interaction zone to a predefined template of a shape, and a second action, which interrupts a light source, to the at least one or more predefined gestures. The system validates the first action and the second action based on the comparison to the predefined template and the at least one or more predefined gestures. The system executes the action based on the validating of the first action and the second action.
  • In yet another aspect of the invention, a computer program product comprising a computer usable medium having readable program code embodied in the medium includes at least one component to perform the steps of the invention, as disclosed and recited herein.
  • In still another embodiment, a method comprises recognizing a first action of a first object and a second action of a second object. The method further includes validating a movement comprising a combination of the first action and the second action by comparison to predefined gestures and executing a complex command based on the validating of the combination of the first action and the second action.
  • In a further aspect of the invention, a method for deploying an application for web searching which comprises providing a computer infrastructure. The computer infrastructure is operable to: project an image on a surface; sense a first action in a predefined interaction zone of the image; validate the first action and displaying a selected state; sense a gesture; validate that the gesture is associated with a pre-defined action; and execute the pre-defined action in response to the validated gesture.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 shows an illustrative environment for implementing the steps in accordance with the invention;
  • FIG. 2 shows an embodiment of a system in accordance with the invention;
  • FIG. 3 is a representation of a range of motion of the system in a representative environment in accordance with an embodiment of the invention;
  • FIG. 4 represents a method to correct for distortion of a projected image on a surface or object;
  • FIG. 5 shows a system architecture according to an embodiment of the invention;
  • FIGS. 6 a and 6 b show a representative look-up table according to an embodiment of the invention;
  • FIG. 7 shows an illustrative template used in accordance with an embodiment of the invention; and
  • FIG. 8 is a representation of a swim lane diagram implementing steps according to an embodiment of the invention.
  • DETAILED DESCRIPTION OF EMBODIMENTS OF THE INVENTION
  • The invention is directed to a system and method for interacting with a projected display and more specifically to a system and method for interacting with a projected display utilizing gestures capable of executing menu driven commands and other complex command structures. The system and method can be implemented using a single computer, over any distributed network or stand-alone server, for example. In embodiments, the system and method is configured to be used as an interactive touch screen projected onto any surface, and which allows the user to perform and/or execute any command on the interactive touch screen surface without the need for a peripheral device such as, for example, a mouse or keyboard. Accordingly, the system and method is configured to provide device-free, non-tethered interaction with a display projected on any number of different surfaces, objects and/or areas in an environment.
  • The system and method of the invention projects displays on different surfaces such as, for example, walls, desks, presentation boards and the like. In implementations, the system and method allows complex commands to be executed such as, for example, opening a new file using a drag down menu, or operations such as cutting, copying, pasting or other commands that require more than a single command step. It should be understood, though, that the system and method may also implement and execute single step commands.
  • In embodiments, the commands are executed using gestures, which are captured, reconciled and executed by a computer. The actions to be executed, in one implementation, require two distinct actions by the user as implemented by a user's hands, pointers of some kind or any combination thereof. Thus, the system and method of the invention does not require any special devices to execute the requested commands and, accordingly, is capable of sensing and supporting forms of interaction such as hand gestures and/or motion of objects, etc. to perform such complex operations.
  • In embodiments, the system and method can be implemented using, for example, the Everywhere Display™, manufactured and sold by International Business Machines Corp. (Everywhere Display™ and IBM are trademarks of IBM Corp. in the United States, other countries, or both.) By way of example, the Everywhere Display can provide computer access in public spaces, facilitate navigation in buildings, localize resources in a physical space, bring computational resources to different areas of an environment, and facilitate the reconfiguration of the workplace.
  • FIG. 1 shows an illustrative environment 10 for managing the processes in accordance with the invention. To this extent, the environment 10 includes a computer infrastructure 12 that can perform the processes described herein. In particular, the computer infrastructure 12 includes a computing device 14 that comprises a management system 30, which makes computing device 14 operable to perform complex commands using gestures in accordance with the invention, e.g., process described herein. The computing device 14 includes a processor 20, a memory 22A, an input/output (I/O) interface 24, and a bus 26. Further, the computing device 14 is in communication with an external I/O device/resource 28 and a storage system 22B.
  • In general, the processor 20 executes computer program code, which is stored in memory 22A and/or storage system 22B. While executing computer program code, the processor 20 can read and/or write data from look-up tables which are the basis for the execution of the commands to be performed on the computer, to/from memory 22A, storage system 22B, and/or I/O interface 24. The bus 26 provides a communications link between each of the components in the computing device 14. The I/O device 28 can comprise any device that enables an individual to interact with the computing device 14 or any device that enables the computing device 14 to communicate with one or more other computing devices using any type of communications link.
  • The computing device 14 can comprise any general purpose computing article of manufacture capable of executing computer program code installed thereon (e.g., a personal computer, server, handheld device, etc.). However, it is understood that the computing device 14 is only representative of various possible equivalent computing devices that may perform the processes described herein. To this extent, in embodiments, the functionality provided by computing device 14 can be implemented by a computing article of manufacture that includes any combination of general and/or specific purpose hardware and/or computer program code. In each embodiment, the program code and hardware can be created using standard programming and engineering techniques, respectively.
  • Similarly, the computer infrastructure 12 is only illustrative of various types of computer infrastructures for implementing the invention. For example, in embodiments, the computer infrastructure 12 comprises two or more computing devices (e.g., a server cluster) that communicate over any type of communications link, such as a network, a shared memory, or the like, to perform the process described herein. Further, while performing the process described herein, one or more computing devices in the computer infrastructure 12 can communicate with one or more other computing devices external to computer infrastructure 12 using any type of communications link. The communications link can comprise any combination of wired and/or wireless links; any combination of one or more types of networks (e.g., the Internet, a wide area network, a local area network, a virtual private network, etc.); and/or utilize any combination of transmission techniques and protocols. As discussed herein, the management system 30 enables the computer infrastructure 12 to recognize gestures and execute associated commands.
  • In embodiments, the invention provides a business method that performs the process steps of the invention on a subscription, advertising, and/or fee basis. That is, a service provider, such as a Solution Integrator, could offer to perform the processes described herein. In this case, the service provider can create, maintain, and support, etc., a computer infrastructure that performs the process steps of the invention for one or more customers. In return, the service provider can receive payment from the customer(s) under a subscription and/or fee agreement and/or the service provider can receive payment from the sale of advertising content to one or more third parties.
  • FIG. 2 shows an embodiment of the system of the invention. As shown in FIG. 2, the system is generally depicted as reference numeral 100 and comprises a projector 110 (e.g., LCD projector) and a computer-controlled pan/tilt mirror 120. The projector 110 is connected to the display output of a computer 130, which also controls the mirror 120. In one non-limiting illustrative example, the light of the projector 110 can be directed in any direction within the range of approximately 60 degrees in the vertical axis and 230 degrees in the horizontal axis. Those of skill in the art should understand that other ranges are contemplated by the invention such as, for example, a range of 360 degrees in the horizontal and/or vertical axis. In embodiments, using the above ranges, the system 100 is capable of projecting a graphical display on most parts of all walls and almost all of the floor or other areas of a room. In embodiments, the projector 110 is a 1200 lumens LCD projector.
  • Still referring to FIG. 2, a camera 140 is also connected to the computer 130 and is configured to capture gestures or motions of the user and provide such gestures or motions to the computer 130 for reconciliation and execution of commands (as discussed in greater detail below). The camera 140 is preferably a CCD based camera which is configured and located to capture motions and the like of the user. The camera 140 and other devices may be connected to the computer via any known networking system as discussed above.
  • FIG. 3 is a representation of a range of motion of the system in a representative environment according to an embodiment of the invention. As shown in FIG. 3, the system 100 of the invention is configured to project a graphical display on walls, the floor, and a table, for example. Of course, depending on the range of the projector, the system 100 is capable of projecting images on most any surface within an environment thus transforming most any surface into an interactive display.
  • FIG. 4 represents a graphical methodology to correct for distortion of the projected image caused by oblique projection and by the shape of the projected surface. To make such correction, the image to be projected is inversely distorted prior to projection on the desired surface using, for example, standard computer graphics hardware to speed up the process of distortion control. By way of illustrative example, one methodology relies on the camera 140 and projector 110 having the same focal length. Therefore, to project an image obliquely without distortions it is sufficient to simulate the inverse process (i.e., viewing with a camera) in a virtual 3D-computer graphics world.
  • More specifically, as shown in FIG. 4, the system and method of the invention texture-maps the image to be displayed onto a virtual computer graphics 3D surface “VS” identical (minus a scale factor) to the actual surface “AS”. The view from the 3D virtual camera 140 should correspond exactly or substantially exactly to the view of the projector (if the projector was the camera) when:
      • the position and attitude of the surface in the 3D virtual space in relation to the 3D virtual camera is identical (minus a scale factor) to the relation between the real surface and the projector, and
      • the virtual camera has identical or substantially identical focal length to the projector.
  • In embodiments, a standard computer graphics board may be used to render the camera's view of the virtual surface and send the computed view to the projector 110. If the position and attitude of the virtual surface “VS” are correct, the projection of this view compensates the distortion caused by oblique projection or by the shape of the surface. Of course, an appropriate virtual 3D surface can be uniquely used and calibrated for each surface where images are projected. In embodiments, the calibration parameters of the virtual 3D surface may be determined manually by projecting a special pattern and interactively adjusting the scale, rotation and position of the virtual surface in the 3D world, and the “lens angle” of the 3D virtual camera.
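  • For the special case of a flat projection surface, rendering the virtual camera's view of the texture-mapped virtual surface reduces to a projective warp of the source image. The sketch below assumes a pre-computed 3x3 homography (obtained, for instance, from the manual calibration described above) that maps projector pixels back to source pixels; it is a simplification for illustration, not the actual Everywhere Display implementation.

```java
/**
 * Simplified pre-warp for a planar surface: the source image is inversely
 * distorted so that, once projected obliquely, it appears undistorted on the
 * surface. The 3x3 matrix h (assumed to come from calibration) maps each
 * projector pixel back into the source image; nearest-neighbor sampling is used.
 */
public class PreWarp {

    static int[][] warp(int[][] src, double[][] h, int outHeight, int outWidth) {
        int[][] out = new int[outHeight][outWidth];
        for (int y = 0; y < outHeight; y++) {
            for (int x = 0; x < outWidth; x++) {
                // Projective mapping of projector pixel (x, y) to source coordinates.
                double w  = h[2][0] * x + h[2][1] * y + h[2][2];
                double sx = (h[0][0] * x + h[0][1] * y + h[0][2]) / w;
                double sy = (h[1][0] * x + h[1][1] * y + h[1][2]) / w;
                int ix = (int) Math.round(sx);
                int iy = (int) Math.round(sy);
                if (iy >= 0 && iy < src.length && ix >= 0 && ix < src[0].length) {
                    out[y][x] = src[iy][ix];   // copy nearest source pixel
                }                              // otherwise leave the pixel black
            }
        }
        return out;
    }
}
```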
  • FIG. 5 shows a current system architecture according to an embodiment of the invention. In embodiments, the system architecture includes a three-tier architecture comprising a services layer 300, an integration layer 310 and an application layer 320. In embodiments, each of the modules 300 a-300 f in the services layer 300 exposes a set of capabilities through a http/XML application programming interface (API). In embodiments, modules in the services layer 300 have no “direct” knowledge or dependence on other modules in the layer; however, the modules 300 a-300 f may share a common XML language along with a dialect for communication with each module in the services layer 300.
  • In embodiments, the services layer 300 includes six modules 300 a-300 f. For example, a vision interface module (vi) 300 a may be responsible for recognizing gestures and converting this information to the application (e.g., program being manipulated by the gestures). A projection module (pj) 300 b may handle the display of visual information (via the projector) on a specified surface while a camera module (sc) 300 c provides the video input (via the camera) from the surface of interest to the vision interface (vi) 300 a. The camera, as discussed above, will send the gestures and other motions of the user. Interaction with the interface by the user comprises orchestrating the vision interface 300 a, projection module 300 b and camera module 300 c through a sequence of synchronous and asynchronous commands, which are capable of being implemented by those of skill in the art. Other modules present in the services layer 300 include a 3D environment modeling module 300 d, a user localization module 300 e, and a geometric reasoning module 300 f.
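  • The services layer is described only at the level of http/XML endpoints, so the following is no more than a shape sketch: three narrow, mutually independent module interfaces and a hypothetical command in the shared XML dialect. None of the element or method names are taken from the actual API.

```java
/**
 * Illustrative-only view of the services layer: independent interfaces for the
 * vision (vi), projection (pj) and camera (sc) modules, plus a hypothetical
 * XML command of the kind an application might POST to one of them.
 */
public interface ServicesLayerSketch {

    interface VisionInterface {          // vi: recognizes gestures in a zone
        void watchInteractionZone(String zoneId);
    }

    interface ProjectionModule {         // pj: displays visuals on a surface
        void project(String surfaceId, String imageRef);
    }

    interface CameraModule {             // sc: feeds video of the surface to vi
        void pointAt(String surfaceId);
    }

    // Hypothetical command in the shared XML dialect mentioned above.
    String EXAMPLE_COMMAND =
            "<command module=\"pj\">"
          + "<project surface=\"wall-1\" image=\"menu.png\"/>"
          + "</command>";
}
```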
  • The 3D environment modeling module 300 d can be a version of standard 3D modeling software. The 3D environment modeling module 300 d can support basic geometric objects built out of planar surfaces and cubes and allows importing of more complex models. In embodiments, the 3D environment modeling module 300 d stores the model in XML format, with objects as tags and annotations as attributes. The 3D environment modeling module 300 d is also designed to be accessible to the geometric reasoning module 300 f, as discussed below.
  • The geometric reasoning module 300 f is a geometric reasoning engine that operates on a model created by a modeling toolkit which, in embodiments, is a version of standard 3D modeling software. The geometric reasoning module 300 f enables automatic selection of the appropriate display and interaction zones (hotspots) based on criteria such as proximity of the zone to the user and non-occlusion of the zone by the user or by other objects. In this manner, gestures can be used to manipulate and execute program commands and/or actions. Applications or other modules can query the geometric reasoning module 300F through a defined XML interface.
  • In embodiments, the geometric reasoning module 300 f receives a user position and a set of criteria, specified as desired ranges of display zone properties, and returns all display zones which satisfy the specified criteria. The geometric reasoning module 300 f may also have a look-up table or access thereto for determining gestures of a user, which may be used to implement the actions or commands associated with a certain application. The properties for a display zone may include, amongst other properties, the following (see the sketch after this list):
      • 1) Physical size of the display zone in some specified units such as inches or centimeters.
      • 2) Absolute orientation defined as the angle between the surface normal of the display zone and a horizontal plane.
      • 3) User proximity defined as the distance between the center of the user's head and the center of a display zone.
      • 4) Position of the user relative to the display zone, defined as the two angles to the user's head in a local spherical coordinate system attached to the display zone. This indicates, for example, whether the user is to the left or to the right of a display zone.
      • 5) Position of the display zone relative to the user, defined as the two angles to the display zone in a local spherical coordinate system attached to the user's head.
      • 6) Occlusion percentage, which is defined as the percentage of the total area of the display zone that is occluded with respect to a specified projector position and orientation.
      • 7) An occlusion mask, which is a bitmap that indicates the parts of a display zone occluded by other objects in the model or by the user.
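  • The following is an illustrative sketch, under assumed class and field names, of the kind of query described above: display zones carrying a subset of the properties just listed are filtered against desired ranges supplied by the caller.

```java
import java.util.List;
import java.util.stream.Collectors;

/**
 * Sketch of the kind of query the geometric reasoning module might answer:
 * given display zones with properties such as those listed above, return
 * the zones whose proximity and occlusion fall inside the caller's desired
 * ranges. The class and field names are illustrative assumptions.
 */
public class DisplayZoneQuery {

    /** A display zone with a subset of the enumerated properties. */
    public record DisplayZone(String id,
                              double sizeInches,        // physical size
                              double orientationDeg,    // angle to horizontal plane
                              double userProximityCm,   // head-to-zone distance
                              double occlusionPercent) {}

    /** Desired ranges, specified by the querying application. */
    public record Criteria(double maxProximityCm, double maxOcclusionPercent) {}

    /** Returns every zone that satisfies the specified criteria. */
    public static List<DisplayZone> select(List<DisplayZone> zones, Criteria c) {
        return zones.stream()
            .filter(z -> z.userProximityCm() <= c.maxProximityCm())
            .filter(z -> z.occlusionPercent() <= c.maxOcclusionPercent())
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<DisplayZone> zones = List.of(
            new DisplayZone("wall-1",  60, 90, 180, 5),
            new DisplayZone("table-1", 30,  0,  90, 40));
        // Ask for nearby, mostly unoccluded zones; keeps only the "wall-1" zone.
        System.out.println(select(zones, new Criteria(200, 10)));
    }
}
```

  • Additional filters for the remaining properties (relative angles, occlusion masks, and so on) would follow the same range-based pattern.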
  • The user localization module 300 e performs, in embodiments, real-time camera-based tracking to determine the position of the user in the environment as well as, in embodiments, gestures of the user. In embodiments, the user localization module 300 e can be configured to track the user's motion to, for example, move the display to the user or, in further embodiments, to recognize gestures of the user for implementing actions or commands.
  • In embodiments, the tracking technique is based on motion, shape, and/or flesh-tone cues. In embodiments, a differencing operation on consecutive frames of the incoming video can be performed. A morphological closing operation then removes noise and fills small gaps in the detected motion regions. A standard contour-tracing algorithm then yields the bounding contours of the segmented regions. The contours are smoothed, and the orientation and curvature along each contour are computed. The shape of each contour is then analyzed to check whether it could be a head or other body part or object of interest, which is tracked by the system and method of the invention.
  • In the example of a head, the system looks for curvature changes corresponding to a head-neck silhouette (e.g., concavities at the neck points and convexity at the top of the head). In embodiments, sufficient flesh-tone color within the detected head region is verified by matching the color of each pixel within the head contour against a model of flesh-tone colors in normalized r-g space. This technique detects multiple heads in real time. In embodiments, multiple cameras with overlapping views may be used to triangulate and estimate the 3D position of the user. This same technique can be used to recognize gestures in order for the user to interact with the display, e.g., to provide complex commands.
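  • Two of the cues described above can be sketched as follows: a frame-differencing step that marks motion pixels, and a simple flesh-tone test in normalized r-g space. The thresholds and the flesh-tone bounding box are assumptions chosen for illustration; the morphological closing and contour-tracing steps are omitted here.

```java
/**
 * Sketch of two of the cues described above: frame differencing to find
 * motion regions, and a normalized r-g flesh-tone test for candidate head
 * pixels. Thresholds and the flesh-tone box are illustrative assumptions.
 */
public class MotionAndFleshToneCues {

    /** Marks pixels whose intensity changed by more than a threshold between frames. */
    public static boolean[][] motionMask(int[][] prevGray, int[][] currGray, int threshold) {
        int h = currGray.length, w = currGray[0].length;
        boolean[][] mask = new boolean[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                mask[y][x] = Math.abs(currGray[y][x] - prevGray[y][x]) > threshold;
            }
        }
        return mask;
    }

    /** Tests a pixel against a simple flesh-tone box in normalized r-g space. */
    public static boolean isFleshTone(int red, int green, int blue) {
        double sum = red + green + blue;
        if (sum == 0) {
            return false;
        }
        double r = red / sum;       // normalized red
        double g = green / sum;     // normalized green
        // Assumed bounding box for flesh tones in (r, g); a real system would learn this model.
        return r > 0.36 && r < 0.55 && g > 0.28 && g < 0.36;
    }

    public static void main(String[] args) {
        int[][] prev = {{10, 10}, {10, 10}};
        int[][] curr = {{10, 90}, {10, 10}};
        System.out.println(motionMask(prev, curr, 30)[0][1]);   // true: this pixel changed
        System.out.println(isFleshTone(190, 120, 90));          // true for this sample color
    }
}
```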
  • In embodiments, the integration layer 310 provides a set of classes that enable a JAVA application to interact with the services. (Java and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both.) The integration layer 310, in embodiments, contains a set of JAVA wrapper objects for all objects and commands, along with classes enabling synchronous and asynchronous communication with modules in the services layer 300. The integration layer 310, in embodiments, mediates the interaction among the services layer modules 300 a-300 f. For example, through a single instruction to the interaction manager 310 a, a JAVA application can start an interaction that sends commands to the vision interface, the projection module and the mirror, thereby defining, instantiating, activating, and managing a complex interactive display interaction. Similarly, the integration layer 310, for example, can coordinate the geometric reasoning module and the 3D environment modeler in a manner that returns the current user position along with all occluded surfaces to the application at a specified interval.
  • In embodiments, the application layer 320 comprises a set of classes and tools for defining and running JAVA applications and a repository of reusable interactions. In embodiments, each interaction is a reusable class that is available to any application. An application class, for example, is a container for composing multiple interactions, maintaining application state during execution, and controlling the sequence of interactions through the help of a sequence manager 320 a. Other tools may also be implemented such as, for example, a calibrator tool that allows a developer to calibrate the vision interface module 300 a, the projection module 300 b and the camera module 300 c for a particular application.
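  • The integration- and application-layer pattern described above can be sketched as follows: an interaction manager hides the individual service modules behind single calls (synchronous for the camera/vision side, asynchronous for the projection side), and a sequence manager runs reusable interactions in order. The interface names and the stubbed behavior below are assumptions for illustration, not the claimed classes.

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;

/**
 * Sketch of the integration/application-layer pattern: an interaction
 * manager fronts the service modules, and a sequence-manager-style helper
 * runs reusable interactions in order. Names and behavior are assumptions.
 */
public class InteractionSequenceSketch {

    /** A reusable interaction, e.g. "project a menu on the nearest surface". */
    interface Interaction {
        void run(InteractionManager manager);
    }

    /** Wraps the services layer; one call can fan out to several modules. */
    static class InteractionManager {
        /** Synchronously asks the camera/vision side for the latest gesture (stubbed). */
        String awaitGesture() {
            return "single-clockwise-rotation";
        }
        /** Asynchronously tells the projection module what to display (stubbed). */
        CompletableFuture<Void> display(String surfaceId, String content) {
            return CompletableFuture.runAsync(
                () -> System.out.println("project '" + content + "' on " + surfaceId));
        }
    }

    /** Runs a list of interactions in order, as a sequence manager might. */
    static void runSequence(List<Interaction> interactions, InteractionManager manager) {
        interactions.forEach(i -> i.run(manager));
    }

    public static void main(String[] args) {
        InteractionManager manager = new InteractionManager();
        List<Interaction> steps = List.of(
            m -> m.display("wall-1", "main menu").join(),
            m -> System.out.println("gesture seen: " + m.awaitGesture()));
        runSequence(steps, manager);
    }
}
```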
  • In embodiments, the user interacts with the projected display by using hand gestures over the projected surface, as if the hands, for example, were a computer mouse. Techniques described above, such as, for example, using the geometric reasoning module 300 f or the user localization module 300 e can be implemented to recognize such gesturing. By way of non-limiting illustration, the geometric reasoning module 300 f may use an occlusion mask, which indicates the parts of a display zone occluded by objects such as, for example, hand gestures of the user.
  • More specifically, in embodiments, the camera may perform three basic steps: (i) detecting when the user is pointing; (ii) tracking where the user is pointing; and (iii) detecting salient events such as a button touch from the pointing trajectory and gestures of the user. This may be performed, for example, by detecting an occlusion of the projected image over a certain zone, such as, for example, an icon or pull down menu. This information is then provided to the computer, which then reconciles such gesture with a look-up table, for example.
  • FIGS. 6 a and 6 b show a representative look-up table according to an embodiment of the invention. Specifically, it is shown that many complex commands can be executed using gestures such as, for example, a single left click of the mouse by the user moving his or her hand in a clockwise rotation. Other gestures are also contemplated by the invention such as those shown in the look-up tables of FIGS. 6 a and 6 b. It should be understood, though, that the gestures shown in FIGS. 6 a and 6 b should be considered merely illustrative examples.
  • As a further example, the invention further contemplates that a complex command can be executed based on a combination of movements by two (or more) objects, such as, for example, both of the user's hands. In this embodiment, the system and method of the invention would attempt to reconcile and/or verify a motion (gesture) of each object, e.g., both hands, using the look-up table of FIGS. 6 a and 6 b, for example. If both of the motions cannot be independently verified in the look-up table, for example, the system and method would attempt to reconcile and/or verify both of the motions using a look-up table populated with actions associated with combination motions. By way of one illustration, an “S” motion of both hands, where the individual motions are not independently recognized, may be a gesture for taking an action such as requesting insertion of a “watermark” in a word processing application. It should be recognized by those of skill in the art that all actions, whether for a single motion or a combination of motions, may be populated in a single look-up table or in multiple look-up tables, without any limitations.
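  • The look-up step described above can be sketched as follows: a single-gesture table is consulted first and, if the individual motions are not independently recognized, the pair of motions is checked against a combination-gesture table. The gesture names and bound actions below are illustrative assumptions reflecting the kinds of entries contemplated in FIGS. 6 a and 6 b.

```java
import java.util.Map;
import java.util.Optional;

/**
 * Sketch of the look-up step: single-hand gestures are resolved first, and
 * unrecognized pairs fall back to a combination-gesture table. Gesture names
 * and actions are illustrative assumptions.
 */
public class GestureLookup {

    private static final Map<String, String> SINGLE = Map.of(
        "clockwise-rotation", "single-left-click",
        "counter-clockwise-rotation", "single-right-click");

    private static final Map<String, String> COMBINATION = Map.of(
        "S-motion+S-motion", "insert-watermark");

    /** Resolves a two-hand observation to an action, if any. */
    public static Optional<String> resolve(String leftMotion, String rightMotion) {
        if (SINGLE.containsKey(leftMotion)) {
            return Optional.of(SINGLE.get(leftMotion));
        }
        if (SINGLE.containsKey(rightMotion)) {
            return Optional.of(SINGLE.get(rightMotion));
        }
        // Neither motion is valid on its own: try the combination table.
        return Optional.ofNullable(COMBINATION.get(leftMotion + "+" + rightMotion));
    }

    public static void main(String[] args) {
        System.out.println(resolve("clockwise-rotation", "idle"));   // single-left-click
        System.out.println(resolve("S-motion", "S-motion"));         // insert-watermark
    }
}
```

  • Whether single and combination entries live in one table or two is an implementation choice; the sketch keeps them separate only to mirror the fallback order described above.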
  • FIG. 7 shows a template which may be implemented in embodiments of the invention. Even though the appearance of an object will change as it moves across the projected image, it will create a region of changed pixels that retains the basic shape of the moving object. To find pointing fingertips, for example, each video frame is subtracted from the frame before it, noise is removed with simple computational morphology, and a fingertip template “T” is then convolved over the difference image using a matching function. If the template “T” does not match well in the image, it can be assumed that the user is not pointing or gesturing.
  • The fingertip template of FIG. 7 is kept short, in embodiments, so that it will match fingertips that extend only slightly beyond their neighbors and will match fingertips within a wider range of angles. As a result, the template often matches well at several points in the image. It should be readily understood that other templates may also be used with the invention such as, for example, pointers and other objects. These templates may also be used for implementing and recognizing the gestures, referring back to FIGS. 6 a and 6 b.
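  • The template-matching step described above can be sketched as follows: a short binary fingertip template is slid over the frame-difference image and scored by simple overlap, with a low best score suggesting that the user is not pointing. The template contents and the interpretation of the score are illustrative assumptions.

```java
/**
 * Sketch of template matching over a difference image: a small binary
 * fingertip template is slid over the changed-pixel mask and scored by
 * overlap. Template contents and thresholds are illustrative assumptions.
 */
public class FingertipTemplateMatch {

    /** Counts how many template pixels coincide with changed pixels at offset (ox, oy). */
    static int score(boolean[][] diff, boolean[][] template, int ox, int oy) {
        int hits = 0;
        for (int y = 0; y < template.length; y++) {
            for (int x = 0; x < template[0].length; x++) {
                if (template[y][x] && diff[oy + y][ox + x]) {
                    hits++;
                }
            }
        }
        return hits;
    }

    /** Returns the best score over all placements; a low best score suggests no pointing. */
    static int bestScore(boolean[][] diff, boolean[][] template) {
        int best = 0;
        for (int oy = 0; oy + template.length <= diff.length; oy++) {
            for (int ox = 0; ox + template[0].length <= diff[0].length; ox++) {
                best = Math.max(best, score(diff, template, ox, oy));
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // A deliberately short "fingertip" template (true = expected changed pixel).
        boolean[][] template = {
            {false, true, false},
            {true,  true, true}};
        boolean[][] diff = new boolean[8][8];
        diff[3][4] = diff[4][3] = diff[4][4] = diff[4][5] = true;   // fingertip-like blob
        System.out.println("best match score = " + bestScore(diff, template));   // prints 4
    }
}
```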
  • FIG. 8 is a swim lane diagram showing steps of an embodiment of the invention. “Swim lane” diagrams may be used to show the relationship between the various “components” in the processes and to define the steps involved in the processes. FIG. 8 may equally represent a high-level block diagram of components of the invention implementing the steps thereof. The steps of FIG. 8 may be implemented in computer program code in combination with the appropriate hardware. This computer program code may be stored on storage media such as a diskette, hard disk, CD-ROM, DVD-ROM or tape, as well as a memory storage device or collection of memory storage devices such as read-only memory (ROM) or random access memory (RAM). Additionally, the computer program code can be transferred to a workstation over the Internet or some other type of network. The steps of FIG. 8 may also be implemented by the embodiment of FIG. 1.
  • In particular, FIG. 8 shows a process flow diagram describing a scenario in which a user performs a series of actions using the gesture-based user interaction grammar provided herein. At step 800, a user approaches a wall or other surface on which the system of the invention has projected a User Interface (UI) for a given application. At step 805, the user is recognized by the camera as they disturb the camera's field of vision. At step 810, the user opts to open a menu from an icon in the UI via a right-click action. In embodiments, the user selects the icon with the dominant hand (e.g., left hand). In one example, button touches are detected by examining the hand trajectory for several specific patterns that indicate this type of motion.
  • At step 815, the camera recognizes the disturbance of the “hotspot” (zone) associated to the selected icon, and calls the system to validate that the shape of the disturbance is identified in the template. At step 820, a determination is made to establish if the shape of the disturbing object is a valid shape in the template. If not, then at step 825, no action is taken; however, as described above, in embodiments, the system may recognize a second disturbance or gesture, at which time the system will make a determination that the combination of the first and second motions (e.g., disturbances) is a unique, valid gesture for an action to be taken.
  • If a valid shape is found at step 820, then the system displays the selected state of the selected icon at step 830. In an alternative embodiment, the system may recognize two gestures simultaneously, at which time the system will make a determination as to whether the combination of gestures is associated with an action. If so, an appropriate action will be taken. This same or similar processing may continue with other examples.
  • At step 835, after successful display of the selected state of the icon, at step 830, the user uses the non-dominant hand (e.g., right hand) to articulate the gesture associated to a “right-click” action, for example (counter-clockwise rotation, see look-up table of FIGS. 6 a and 6 b). At step 840, the camera recognizes the articulation of the gesture, and at step 845, the system performs lookup to validate that the gesture resides in the system and is associated to an action.
  • At step 850, a determination is made as to whether the gesture is associated to an action. If there is no associated action, the system will revert to step 825 and take no action. If there is an associated action, at step 855, the system will execute the action (e.g., display open menu). Thus, after the system successfully identifies the articulated gesture, the system displays the appropriate action (e.g., opening a menu associated to the initially selected icon).
  • At step 860, the user selects from one of “X” number of possible navigational menu options. At step 865, the camera recognizes the disturbance of the hotspot (interaction zone) associated to the selected menu item, and calls to validate that the shape of the disturbance is identified in the template. At step 870, a determination is made as to whether the shape of the disturbing object is a valid shape in the template. If not recognized, then the system reverts back to step 825 and takes no action. If the gesture is valid (recognized), then at step 875, the system displays the selected state of the selected menu item.
  • At step 880, after successful display of the selected state of the menu item, the user uses the non-dominant hand, for example, to articulate the gesture associated to a “single left-click” action (single clockwise rotation, see look-up table of FIGS. 6 a and 6 b). At steps 885 and 890, the camera recognizes the articulation of the gesture, and the system performs lookup to validate that the gesture resides in the system and is associated to an action.
  • At step 895, the system makes a determination as to whether the gesture is associated with an action. If not, the system again reverts back to step 825. If there is an associated action, at step 900, the system executes the associated action (in this case, navigating the user to the associated screen in the UI). The process then ends at “E”.
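  • The decision flow of FIG. 8 can be sketched in code as follows: validate the shape of the disturbing object against the template, display the selected state, then look up the articulated gesture and either execute the bound action or take no action. The shape names, gesture names and actions below are illustrative assumptions.

```java
import java.util.Map;
import java.util.Optional;

/**
 * Sketch of the FIG. 8 decision flow: validate the disturbing object's shape,
 * show the selected state, then look up the articulated gesture and execute
 * the bound action or do nothing. Names and actions are assumptions.
 */
public class GestureFlowSketch {

    private static final Map<String, Runnable> ACTIONS = Map.of(
        "counter-clockwise-rotation", () -> System.out.println("open menu"),
        "single-clockwise-rotation",  () -> System.out.println("navigate to screen"));

    /** Stand-in for the template check of steps 815-820. */
    static boolean isValidShape(String shape) {
        return shape.equals("hand") || shape.equals("pointer");
    }

    /** Mirrors steps 815 through 855/900: validate, select, look up, execute or do nothing. */
    static void handle(String disturbingShape, String articulatedGesture) {
        if (!isValidShape(disturbingShape)) {
            System.out.println("no action (shape not in template)");        // step 825
            return;
        }
        System.out.println("display selected state");                       // steps 830/875
        Optional.ofNullable(ACTIONS.get(articulatedGesture))
                .ifPresentOrElse(Runnable::run,                              // steps 855/900
                    () -> System.out.println("no action (gesture unknown)")); // step 825
    }

    public static void main(String[] args) {
        handle("hand", "counter-clockwise-rotation");   // selects icon, then opens its menu
        handle("elbow", "single-clockwise-rotation");   // rejected at the shape check
    }
}
```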
  • In a more generalized embodiment, a user points to a particular zone within the display area, e.g., a certain application. The system of the invention would recognize such action by the methods noted above. In embodiments, once the system recognizes the user within a zone and verifies that this is the proper zone, the system would “lock” that selection. Once locked, the user can then provide a gesture such as, for example, an “e” shape to exit the application, which will then be verified and executed by the system of the invention.
  • While the invention has been described in terms of embodiments, those skilled in the art will recognize that the invention can be practiced with modifications and within the spirit and scope of the appended claims.

Claims (18)

1. A method, comprising:
recognizing with a camera a disturbance of a disturbing object in a display zone of a projected image;
making a determination as to whether a shape of the disturbing object is a valid shape in a template;
displaying a selected state of a selected item when the disturbing object is a valid shape;
recognizing with the camera a movement comprising a combination of two substantially simultaneous actions, including a first action of the disturbing object and a second action of an object in an interaction zone between the camera and the projected image;
validating the movement comprising the combination of the first action and the second action by comparing the first action and the second action to a plurality of predefined gestures; and
executing a command based on the validating of the combination of the first action and the second action.
2. The method of claim 1, wherein the display zone is one or more areas on the image.
3. The method of claim 1, wherein the executed command is a menu driven command.
4. The method of claim 1, wherein the predefined combination of two substantially simultaneous actions includes a trajectory for defined patterns that indicate at least one of a type and place of motion.
5. The method of claim 1, wherein when the disturbing object is a valid shape, the selected state of a selected icon or menu item in a user interface of the projected image is displayed.
6. The method of claim 5, wherein a look-up table associates the plurality of predefined gestures with a respective plurality of commands, the plurality of pre-defined gestures comprising:
a single left click gesture that executes a single left mouse click selection;
a double left click gesture that executes double left mouse click selection;
a single right click gesture that executes a single right mouse click selection;
a copy gesture that executes a copy command;
a paste gesture that executes a paste command;
an undo gesture that executes an undo command; and
a redo gesture that executes a redo command.
7. The method of claim 6, wherein the plurality of pre-defined gestures further comprise:
a next gesture that takes a user to a next entry in a list;
a previous gesture that takes the user to a previous entry in the list;
a first entry gesture that takes the user to a first entry in the list;
a last entry gesture that takes the user to a last entry in the list;
a home gesture that takes the user to a home of an application; and
an exit gesture that executes an exit command and closes the application.
8. The method of claim 1, further comprising, before recognizing the disturbance in the display zone,
recognizing a user entering a field of vision of a camera; and
determining the proximity of the user to the user interface.
9. A system comprising a server having a database containing data associated with at least one or more predefined gestures, and at least one of a hardware and software component for executing an action based on the at least one or more predefined gestures, the hardware and software:
comparing a first action of a disturbing object, detected by a camera, in an interaction zone between the camera and a projected image to a predefined template of a shape;
recognizing a gesture detected by the camera, which interrupts a light source between the camera and the projected image, the gesture comprising a combination of two substantially simultaneous hand motions;
determining that the gesture corresponds to a predefined command of the user interface; and
executing the command corresponding to the gesture.
10. The system of claim 9, wherein the system includes an architecture comprising a services layer, an integration layer and an application layer.
11. The system of claim 10, wherein the services layer comprises at least:
a vision interface module responsible for recognizing the gesture and converting an appropriate action to an application;
a geometric reasoning module which enables automatic selection of a display based on proximity of the interaction zone to a user and non-occlusion of the interaction zone by the user or other object; and
a user localization module to determine a position of the user.
12. The system of claim 11, wherein one of the geometric reasoning module and user localization module recognize the first action and the second action.
13. The system of claim 9, wherein the at least one of a hardware and software component resides on a server provided by a service provider.
14. A computer program product comprising a computer usable storage medium having readable program code embodied in the storage medium, the computer program product includes at least one component to:
recognize a disturbance caused by a first finger of a first hand of the user in a display zone of a user interface image projected on a surface in the field of vision of the camera;
display a selected state in response to the recognized disturbance;
recognize a gesture interrupting a light source and associated with an action to be taken on or associated with the displayed selected state, the gesture comprising a combination of two substantially simultaneous motions by the first hand of the user and a second hand of the user;
determine that the gesture corresponds to a command of the user interface based on one or more look-up tables populated with a plurality of pre-defined hand motions and a respective plurality of commands of the user interface; and
execute the associated command of the user interface in response to the gesture.
15. The computer program product of claim 14, wherein the recognized gesture includes a motion of a non-tethered object between the source and the projected image.
16. The computer program product of claim 14, wherein the display zone is one or more areas on the projected image which are associated with an application program.
17. The computer program product of claim 14, wherein when the disturbance is a valid shape, the selected state of a selected item is displayed.
18. The computer program product of claim 14, wherein the executed function is a menu driven command.
US13/614,200 2006-10-25 2012-09-13 System and method for interacting with a display Abandoned US20130002539A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US13/614,200 US20130002539A1 (en) 2006-10-25 2012-09-13 System and method for interacting with a display

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US11/552,811 US8356254B2 (en) 2006-10-25 2006-10-25 System and method for interacting with a display
US13/614,200 US20130002539A1 (en) 2006-10-25 2012-09-13 System and method for interacting with a display

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US11/552,811 Continuation US8356254B2 (en) 2006-10-25 2006-10-25 System and method for interacting with a display

Publications (1)

Publication Number Publication Date
US20130002539A1 true US20130002539A1 (en) 2013-01-03

Family

ID=39526718

Family Applications (2)

Application Number Title Priority Date Filing Date
US11/552,811 Expired - Fee Related US8356254B2 (en) 2006-10-25 2006-10-25 System and method for interacting with a display
US13/614,200 Abandoned US20130002539A1 (en) 2006-10-25 2012-09-13 System and method for interacting with a display

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US11/552,811 Expired - Fee Related US8356254B2 (en) 2006-10-25 2006-10-25 System and method for interacting with a display

Country Status (1)

Country Link
US (2) US8356254B2 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080192021A1 (en) * 2007-02-08 2008-08-14 Samsung Electronics Co. Ltd. Onscreen function execution method for mobile terminal having a touchscreen
US20110164055A1 (en) * 2010-01-06 2011-07-07 Mccullough Ian Patrick Device, Method, and Graphical User Interface for Manipulating a Collection of Objects
US20130002538A1 (en) * 2008-12-22 2013-01-03 Mooring David J Gesture-based user interface for a wearable portable device
US20150089453A1 (en) * 2013-09-25 2015-03-26 Aquifi, Inc. Systems and Methods for Interacting with a Projected User Interface
US11368212B2 (en) * 2018-09-14 2022-06-21 Arizona Board Of Regents On Behalf Of The University Of Arizona Laser beam for external position control and traffic management of on-orbit satellites
US11620042B2 (en) 2019-04-15 2023-04-04 Apple Inc. Accelerated scrolling and selection

Families Citing this family (50)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8681100B2 (en) 2004-07-30 2014-03-25 Extreme Realty Ltd. Apparatus system and method for human-machine-interface
US8872899B2 (en) * 2004-07-30 2014-10-28 Extreme Reality Ltd. Method circuit and system for human to machine interfacing by hand gestures
EP1789928A4 (en) 2004-07-30 2011-03-16 Extreme Reality Ltd A system and method for 3d space-dimension based image processing
US20070285554A1 (en) 2005-10-31 2007-12-13 Dor Givon Apparatus method and system for imaging
US9046962B2 (en) 2005-10-31 2015-06-02 Extreme Reality Ltd. Methods, systems, apparatuses, circuits and associated computer executable code for detecting motion, position and/or orientation of objects within a defined spatial region
US8370383B2 (en) 2006-02-08 2013-02-05 Oblong Industries, Inc. Multi-process interactive systems and methods
US8972902B2 (en) * 2008-08-22 2015-03-03 Northrop Grumman Systems Corporation Compound gesture recognition
US7907125B2 (en) * 2007-01-05 2011-03-15 Microsoft Corporation Recognizing multiple input point gestures
EP2150893A4 (en) * 2007-04-24 2012-08-22 Oblong Ind Inc Proteins, pools, and slawx in processing environments
US8681104B2 (en) * 2007-06-13 2014-03-25 Apple Inc. Pinch-throw and translation gestures
WO2009018314A2 (en) 2007-07-30 2009-02-05 Perceptive Pixel, Inc. Graphical user interface for large-scale, multi-user, multi-touch systems
JP4829855B2 (en) * 2007-09-04 2011-12-07 キヤノン株式会社 Image projection apparatus and control method thereof
US10146320B2 (en) 2007-10-29 2018-12-04 The Boeing Company Aircraft having gesture-based control for an onboard passenger service unit
US20090109036A1 (en) * 2007-10-29 2009-04-30 The Boeing Company System and Method for Alternative Communication
CA2735992A1 (en) * 2008-09-04 2010-03-11 Extreme Reality Ltd. Method system and software for providing image sensor based human machine interfacing
US20100138797A1 (en) * 2008-12-01 2010-06-03 Sony Ericsson Mobile Communications Ab Portable electronic device with split vision content sharing control and method
US20100218100A1 (en) * 2009-02-25 2010-08-26 HNTB Holdings, Ltd. Presentation system
JP5091180B2 (en) * 2009-03-27 2012-12-05 ソニーモバイルコミュニケーションズ, エービー Mobile terminal device
KR101577106B1 (en) 2009-09-21 2015-12-11 익스트림 리얼리티 엘티디. Methods circuits apparatus and systems for human machine interfacing with an electronic appliance
US8878779B2 (en) 2009-09-21 2014-11-04 Extreme Reality Ltd. Methods circuits device systems and associated computer executable code for facilitating interfacing with a computing platform display screen
US9971807B2 (en) 2009-10-14 2018-05-15 Oblong Industries, Inc. Multi-process interactive systems and methods
US8549418B2 (en) * 2009-12-23 2013-10-01 Intel Corporation Projected display to enhance computer device use
CN102129151A (en) * 2010-01-20 2011-07-20 鸿富锦精密工业(深圳)有限公司 Front projection control system and method
JP5740822B2 (en) 2010-03-04 2015-07-01 ソニー株式会社 Information processing apparatus, information processing method, and program
US10410500B2 (en) 2010-09-23 2019-09-10 Stryker Corporation Person support apparatuses with virtual control panels
CN103238135B (en) * 2010-10-05 2017-05-24 思杰系统有限公司 Gesture support for shared sessions
KR101727040B1 (en) * 2010-10-14 2017-04-14 엘지전자 주식회사 An electronic device, a method for providing menu using the same
US8905551B1 (en) * 2010-12-23 2014-12-09 Rawles Llc Unpowered augmented reality projection accessory display device
US8845110B1 (en) * 2010-12-23 2014-09-30 Rawles Llc Powered augmented reality projection accessory display device
US8845107B1 (en) 2010-12-23 2014-09-30 Rawles Llc Characterization of a scene with structured light
US9721386B1 (en) 2010-12-27 2017-08-01 Amazon Technologies, Inc. Integrated augmented reality environment
US9508194B1 (en) 2010-12-30 2016-11-29 Amazon Technologies, Inc. Utilizing content output devices in an augmented reality environment
US9607315B1 (en) 2010-12-30 2017-03-28 Amazon Technologies, Inc. Complementing operation of display devices in an augmented reality environment
KR20140030138A (en) 2011-01-23 2014-03-11 익스트림 리얼리티 엘티디. Methods, systems, devices, and associated processing logic for generating stereoscopic images and video
US9723293B1 (en) * 2011-06-21 2017-08-01 Amazon Technologies, Inc. Identifying projection surfaces in augmented reality environments
US9292112B2 (en) * 2011-07-28 2016-03-22 Hewlett-Packard Development Company, L.P. Multimodal interface
US8751972B2 (en) * 2011-09-20 2014-06-10 Google Inc. Collaborative gesture-based input language
US8887043B1 (en) * 2012-01-17 2014-11-11 Rawles Llc Providing user feedback in projection environments
TWM450762U (en) * 2012-04-23 2013-04-11 shun-fu Luo All new one stroke operation control device
TWM439217U (en) * 2012-05-02 2012-10-11 shun-fu Luo All new ui-e1-stroke operation control device
US9241124B2 (en) * 2013-05-01 2016-01-19 Lumo Play, Inc. Content generation for interactive video projection systems
US9876966B2 (en) 2013-10-18 2018-01-23 Pixart Imaging Inc. System and method for determining image variation tendency and controlling image resolution
TWI532377B (en) * 2013-10-18 2016-05-01 原相科技股份有限公司 Image sensing system, image sensing method, eye tracking system, eye tracking method
CN104580943B (en) * 2013-10-28 2019-10-18 原相科技股份有限公司 Image sensing system and method and eyeball tracking system and method
US9993733B2 (en) 2014-07-09 2018-06-12 Lumo Interactive Inc. Infrared reflective device interactive projection effect system
CN105867818A (en) * 2016-03-30 2016-08-17 乐视控股(北京)有限公司 Terminal interaction control device
US11947978B2 (en) 2017-02-23 2024-04-02 Ab Initio Technology Llc Dynamic execution of parameterized applications for the processing of keyed network data streams
US10831509B2 (en) 2017-02-23 2020-11-10 Ab Initio Technology Llc Dynamic execution of parameterized applications for the processing of keyed network data streams
US10916065B2 (en) * 2018-05-04 2021-02-09 Facebook Technologies, Llc Prevention of user interface occlusion in a virtual reality environment
EP3865940B1 (en) * 2020-02-17 2023-10-25 Bayerische Motoren Werke Aktiengesellschaft 360 degree projection mapping device for a vehicle

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05317345A (en) 1992-05-18 1993-12-03 Kobe Steel Ltd Artificial joint
US5528263A (en) * 1994-06-15 1996-06-18 Daniel M. Platzker Interactive projected video image display system
US6266057B1 (en) * 1995-07-05 2001-07-24 Hitachi, Ltd. Information processing system
DE19708240C2 (en) * 1997-02-28 1999-10-14 Siemens Ag Arrangement and method for detecting an object in a region illuminated by waves in the invisible spectral range
JP3968477B2 (en) * 1997-07-07 2007-08-29 ソニー株式会社 Information input device and information input method
US6346933B1 (en) 1999-09-21 2002-02-12 Seiko Epson Corporation Interactive display presentation system
US6554434B2 (en) 2001-07-06 2003-04-29 Sony Corporation Interactive projection system
US7006055B2 (en) * 2001-11-29 2006-02-28 Hewlett-Packard Development Company, L.P. Wireless multi-user multi-projector presentation system
US7348963B2 (en) * 2002-05-28 2008-03-25 Reactrix Systems, Inc. Interactive video display system
US6802611B2 (en) 2002-10-22 2004-10-12 International Business Machines Corporation System and method for presenting, capturing, and modifying images on a presentation board
US7775883B2 (en) * 2002-11-05 2010-08-17 Disney Enterprises, Inc. Video actuated interactive environment
US7576727B2 (en) * 2002-12-13 2009-08-18 Matthew Bell Interactive directed light/sound system
US20060033725A1 (en) * 2004-06-03 2006-02-16 Leapfrog Enterprises, Inc. User created interactive interface
US7519223B2 (en) * 2004-06-28 2009-04-14 Microsoft Corporation Recognizing gestures and using gestures for interacting with software applications

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7190518B1 (en) * 1996-01-22 2007-03-13 3Ality, Inc. Systems for and methods of three dimensional viewing
US6385331B2 (en) * 1997-03-21 2002-05-07 Takenaka Corporation Hand pointing device
US7036094B1 (en) * 1998-08-10 2006-04-25 Cybernet Systems Corporation Behavior recognition system
US6222465B1 (en) * 1998-12-09 2001-04-24 Lucent Technologies Inc. Gesture-based computer interface
US7129927B2 (en) * 2000-03-13 2006-10-31 Hans Arvid Mattson Gesture recognition system
US7227526B2 (en) * 2000-07-24 2007-06-05 Gesturetek, Inc. Video-based image control system
US7274800B2 (en) * 2001-07-18 2007-09-25 Intel Corporation Dynamic gesture recognition from stereo sequences
US7165029B2 (en) * 2002-05-09 2007-01-16 Intel Corporation Coupled hidden Markov model for audiovisual speech recognition
US7200266B2 (en) * 2002-08-27 2007-04-03 Princeton University Method and apparatus for automated video activity analysis
US7957554B1 (en) * 2002-12-31 2011-06-07 Cognex Technology And Investment Corporation Method and apparatus for human interface to a machine vision system
US7379563B2 (en) * 2004-04-15 2008-05-27 Gesturetek, Inc. Tracking bimanual movements
US7598942B2 (en) * 2005-02-08 2009-10-06 Oblong Industries, Inc. System and method for gesture based control system
US7725547B2 (en) * 2006-09-06 2010-05-25 International Business Machines Corporation Informing a user of gestures made by others out of the user's line of sight
US7877707B2 (en) * 2007-01-06 2011-01-25 Apple Inc. Detecting and interpreting real-world and security gestures on touch and hover sensitive devices
US7770136B2 (en) * 2007-01-24 2010-08-03 Microsoft Corporation Gesture recognition interactive feedback

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Akira Utsumi et al.; Multiple-Hand-Gesture Tracking using Multiple Cameras, 1999, IEEE, pp. 473-478 *
James P. Mammen et al.; Simultaneous Tracking of Both Hands by Estimation of Erroneous Observations, Indian Institute of Technology, 2001 *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080192021A1 (en) * 2007-02-08 2008-08-14 Samsung Electronics Co. Ltd. Onscreen function execution method for mobile terminal having a touchscreen
US9041681B2 (en) * 2007-02-08 2015-05-26 Samsung Electronics Co., Ltd. Onscreen function execution method for mobile terminal having a touchscreen
US9395913B2 (en) 2007-02-08 2016-07-19 Samsung Electronics Co., Ltd. Onscreen function execution method for mobile terminal having a touchscreen
US9641749B2 (en) 2007-02-08 2017-05-02 Samsung Electronics Co., Ltd. Onscreen function execution method for mobile terminal having a touchscreen
US20130002538A1 (en) * 2008-12-22 2013-01-03 Mooring David J Gesture-based user interface for a wearable portable device
US8576073B2 (en) * 2008-12-22 2013-11-05 Wimm Labs, Inc. Gesture-based user interface for a wearable portable device
US20110164055A1 (en) * 2010-01-06 2011-07-07 Mccullough Ian Patrick Device, Method, and Graphical User Interface for Manipulating a Collection of Objects
US8786639B2 (en) * 2010-01-06 2014-07-22 Apple Inc. Device, method, and graphical user interface for manipulating a collection of objects
US20150089453A1 (en) * 2013-09-25 2015-03-26 Aquifi, Inc. Systems and Methods for Interacting with a Projected User Interface
US11368212B2 (en) * 2018-09-14 2022-06-21 Arizona Board Of Regents On Behalf Of The University Of Arizona Laser beam for external position control and traffic management of on-orbit satellites
US11620042B2 (en) 2019-04-15 2023-04-04 Apple Inc. Accelerated scrolling and selection

Also Published As

Publication number Publication date
US8356254B2 (en) 2013-01-15
US20080143975A1 (en) 2008-06-19

Similar Documents

Publication Publication Date Title
US8356254B2 (en) System and method for interacting with a display
US10739865B2 (en) Operating environment with gestural control and multiple client devices, displays, and users
US6594616B2 (en) System and method for providing a mobile input device
US10296099B2 (en) Operating environment with gestural control and multiple client devices, displays, and users
US9659280B2 (en) Information sharing democratization for co-located group meetings
CN102541256B (en) There is the location-aware posture of visual feedback as input method
JP4933438B2 (en) A system for distributed information presentation and interaction.
US8159501B2 (en) System and method for smooth pointing of objects during a presentation
US9141937B2 (en) System for storage and navigation of application states and interactions
US20160358383A1 (en) Systems and methods for augmented reality-based remote collaboration
US20170038846A1 (en) Visual collaboration interface
US8659547B2 (en) Trajectory-based control method and apparatus thereof
JP2008519340A5 (en)
US11064784B2 (en) Printing method and system of a nail printing apparatus, and a medium thereof
US10824238B2 (en) Operating environment with gestural control and multiple client devices, displays, and users
Simões et al. Unlocking augmented interactions in short-lived assembly tasks
US10802664B2 (en) Dynamic layout design
Stafford-Fraser Video-augmented environments
Hardy Toolkit support for interactive projected displays
Matthews et al. Sketch-based interfaces: drawings to data
Sanders Development of a Portable Touchscreen using a Projector and Camera (s)

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO PAY ISSUE FEE