US8854361B1 - Visually augmenting a graphical rendering of a chemical structure representation or biological sequence representation with multi-dimensional information - Google Patents

Visually augmenting a graphical rendering of a chemical structure representation or biological sequence representation with multi-dimensional information

Info

Publication number
US8854361B1
US8854361B1 (application US14/058,865; also published as US201314058865A)
Authority
US
United States
Prior art keywords
computing device
video data
additional information
chemical structure
graphical
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
US14/058,865
Other versions
US20140267240A1 (en)
Inventor
Robin Y. Smith
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Revvity Signals Software Inc
Original Assignee
CambridgeSoft Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CambridgeSoft Corp filed Critical CambridgeSoft Corp
Priority to US14/058,865
Assigned to CAMBRIDGESOFT CORPORATION (assignment of assignors interest; see document for details). Assignors: SMITH, ROBIN Y.
Priority to PCT/US2014/016249
Priority to EP14708174.9A
Priority to US14/471,456
Publication of US20140267240A1
Publication of US8854361B1
Application granted
Assigned to PERKINELMER INFORMATICS, INC. (change of name; see document for details). Assignor: CAMBRIDGESOFT CORPORATION
Assigned to REVVITY SIGNALS SOFTWARE, INC. (change of name; see document for details). Assignor: PERKINELMER INFORMATICS, INC.
Legal status: Active

Classifications

    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16B - BIOINFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR GENETIC OR PROTEIN-RELATED DATA PROCESSING IN COMPUTATIONAL MOLECULAR BIOLOGY
    • G16B45/00 - ICT specially adapted for bioinformatics-related data visualisation, e.g. displaying of maps or networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14 - Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • G06T19/006 - Mixed reality
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B - EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00 - Electrically-operated educational appliances
    • G09B5/02 - Electrically-operated educational appliances with visual presentation of the material to be studied, e.g. using film strip
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16B - BIOINFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR GENETIC OR PROTEIN-RELATED DATA PROCESSING IN COMPUTATIONAL MOLECULAR BIOLOGY
    • G16B5/00 - ICT specially adapted for modelling or simulations in systems biology, e.g. gene-regulatory networks, protein interaction networks or metabolic networks
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16C - COMPUTATIONAL CHEMISTRY; CHEMOINFORMATICS; COMPUTATIONAL MATERIALS SCIENCE
    • G16C10/00 - Computational theoretical chemistry, i.e. ICT specially adapted for theoretical aspects of quantum chemistry, molecular mechanics, molecular dynamics or the like
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16C - COMPUTATIONAL CHEMISTRY; CHEMOINFORMATICS; COMPUTATIONAL MATERIALS SCIENCE
    • G16C20/00 - Chemoinformatics, i.e. ICT specially adapted for the handling of physicochemical or structural data of chemical particles, elements, compounds or mixtures
    • G16C20/80 - Data visualisation
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G - ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2340/00 - Aspects of display data processing
    • G09G2340/10 - Mixing of images, i.e. displayed pixel being the result of an operation, e.g. adding, on the corresponding input pixels
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G - ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2370/00 - Aspects of data communication
    • G09G2370/02 - Networking aspects
    • G09G2370/022 - Centralised management of display operation, e.g. in a server instead of locally
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G - ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2380/00 - Specific applications
    • G09G2380/08 - Biomedical applications

Definitions

  • Chemical structure rendering software is widely used by research and educational institutions to depict chemical structures and chemical reactions of interest. Unlike chemical formulas or chemical names, structural formulas provide a graphical representation of the molecular structure. A graphical chemical structure representation is capable of indicating the arrangements of atoms in a way that a chemical formula cannot.
  • Biological sequence and large molecule rendering software is widely used by research and educational institutions to depict biological sequences, including nucleotide and amino acid sequences of interest.
  • a three-dimensional graphical representation can be extrapolated from a nucleotide and/or amino acid sequence to depict the arrangements of atoms, nucleotides, and/or amino acid residues in a way that a simple sequence (e.g., a nucleotide and/or amino acid sequence) cannot.
  • Augmented reality refers to augmenting the real world environment with computer-generated input such as graphics, video, or audio information.
  • digital objects can be overlaid on real world video data captured on the handheld computing device.
  • Examples of handheld device augmented reality include the Layar™ print media augmentation application by Layar of Amsterdam, the Netherlands, the Word Lens augmented reality translation application by Quest Visual of San Francisco, Calif., and the Wikitude® Augmented Reality mobile platform by Wikitude GmbH of Salzburg, Austria.
  • a computing device, such as a hand-held smart phone, computerized eye glasses, or tablet computer
  • a user captures a video image containing information regarding at least one of a chemical structure and a biological sequence.
  • the video image may capture an illustration of a chemical structure, such as a two-dimensional rendering of a chemical structure drawn on a piece of paper, white board, chalk board, or transparent drawing board as in a laboratory.
  • the video image may capture a written chemical name, biological species name, formula, or other visual indication of a chemical compound or biological species.
  • the computing device used to capture the video image accesses software configured to identify the captured chemical structure or biological sequence.
  • the software is further configured to augment the video image, prior to being displayed to the user (e.g., in a display area of the computing device or on a separate display), with information regarding the captured chemical structure or biological sequence.
  • the information can include the chemical name, biological species name, properties of the chemical compound or biological species, whether the identified chemical compound or biological species is available in a given lab or stock room, and (if identified in a lab or stock room) a storage location of the identified chemical compound or biological species.
  • the information can include a three-dimensional representation of the chemical structure or biological sequence.
  • the display containing the augmented video image, in a preferred embodiment, is presented in substantially real-time, such that the video is captured, augmented with additional information pertaining to a chemical structure or biological sequence rendered therein, and displayed to the user while the user continues to capture the video image containing the image of the chemical structure or biological sequence.
  • the additional information may be positionally linked to the captured image of the chemical structure or biological sequence (or other graphical representation) being viewed.
  • the additional information may be superimposed on, or placed in proximity to, the video image of a two-dimensional graphical chemical structure captured within the video image.
  • Call-out boxes may be displayed on the screen prompting the user to choose to display particular additional information such as, in some examples, the name, storage location, and/or properties of the identified chemical structure or biological species.
  • the call-out boxes may be superimposed on the live video image being viewed such that it is clear to the user what portion of the captured video image has been identified by the software as a chemical structure or biological sequence.
  • the additional information may be displayed (i) on top of the video image of an identified graphical rendering of a chemical structure or biological sequence, (ii) in a call-out box pointing to the identified graphical rendering of the chemical structure or biological sequence, or (iii) superimposed on top of (e.g., at least partially overlapping) the identified graphical rendering of the chemical structure or biological sequence.
  • the additional information, in some embodiments, is rendered partially transparent such that the captured video image is visible beneath the additional information.
  • the additional information, in this circumstance, may optionally be rendered opaque responsive to selection by a user (e.g., receiving a touch input corresponding to a transparent call-out box rendered upon a touch screen device, etc.). If multiple chemical structure representations and/or biological sequence representations are identified, the position of the additional information (or displayed information prompts), in some embodiments, indicates which identified chemical structure or biological sequence correlates to which additional information.
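The overlay and call-out behavior described in the preceding paragraphs can be illustrated with a short sketch. This is illustrative only, not the patented implementation: OpenCV is assumed as the drawing library, and the bounding box, label, and selection state are assumed to come from upstream identification and input-handling steps not shown here.

```python
import cv2
import numpy as np

def draw_callout(frame: np.ndarray, box: tuple, label: str,
                 selected: bool = False, alpha: float = 0.4) -> np.ndarray:
    """Overlay a call-out positionally linked to `box` = (x, y, w, h)."""
    x, y, w, h = box
    overlay = frame.copy()
    # Call-out rectangle placed just above the identified structure.
    cx, cy = x, max(0, y - 40)
    cv2.rectangle(overlay, (cx, cy), (cx + 180, cy + 30), (255, 255, 255), -1)
    cv2.putText(overlay, label, (cx + 6, cy + 22),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 0), 1)
    # Lead line tying the call-out to the structure it annotates.
    cv2.line(overlay, (cx + 90, cy + 30), (x + w // 2, y), (0, 0, 0), 1)
    # Semi-transparent by default; opaque once the user selects the call-out.
    blend = 1.0 if selected else alpha
    return cv2.addWeighted(overlay, blend, frame, 1.0 - blend, 0)
```

A semi-transparent blend keeps the captured drawing visible beneath the call-out, matching the partially transparent rendering described above; drawing at full opacity on selection mirrors the opaque-on-touch behavior.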
  • a three-dimensional representation of the identified chemical structure or biological sequence may be added to the video image prior to display.
  • the position of the three-dimensional representation may correspond to the position of the identified chemical structure or biological sequence (e.g., it may be superimposed or partially superimposed).
  • the user may be provided the opportunity to interact with the three-dimensional representation.
  • the user may be able to rotate, resize, and/or relocate an added three-dimensional representation of the captured chemical structure or biological sequence as rendered upon the display.
  • the three-dimensional representation may be rotated, for example, as a function of the position of the mobile device (e.g., tilting the device results in tilting the three-dimensional representation).
  • the three-dimensional representation, in some embodiments, rotates automatically (e.g., slowly spinning and/or tilting upon the display).
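As a rough sketch of the orientation-driven and automatic rotation just described (illustrative only; the coordinate array and the source of the tilt angles are hypothetical):

```python
import numpy as np

def rotate_model(coords: np.ndarray, pitch: float, yaw: float) -> np.ndarray:
    """Rotate an (N, 3) array of atom positions by device pitch/yaw (radians)."""
    # Rotation about the x-axis (pitch) and the y-axis (yaw).
    rx = np.array([[1, 0, 0],
                   [0, np.cos(pitch), -np.sin(pitch)],
                   [0, np.sin(pitch),  np.cos(pitch)]])
    ry = np.array([[ np.cos(yaw), 0, np.sin(yaw)],
                   [0, 1, 0],
                   [-np.sin(yaw), 0, np.cos(yaw)]])
    # Row vectors: apply rx first, then ry.
    return coords @ (rx.T @ ry.T)

# Tilt-follows-device: feed pitch/yaw from the gyroscope each frame.
# Automatic slow spin: instead advance yaw by a small increment per frame.
```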
  • the systems, methods, and apparatus utilize or include a tablet computer, a mobile phone device, an augmented reality wearable computer, or any other computer device or system capable of capturing and presenting video data.
  • the systems, methods, and apparatus utilize or include a laptop computer, desktop computer, notebook computer or other computer device or system interfacing with a device capable of capturing and presenting video data (e.g., through a wired or wireless connection).
  • the systems, methods, and apparatus have applications in a wide variety of industries and environments that work with graphical representations of chemical structural formulas, such as laboratories, research facilities, and classroom environments.
  • One aspect of the invention relates to a method including receiving video data captured by a first computing device, wherein the video data includes a video image.
  • the method includes identifying, within the video image, by a processor of a second computing device, a graphical representation of one of a chemical structure and a biological sequence.
  • the method also includes matching, by the processor, additional information to the graphical representation.
  • the additional information includes at least one of (i) a three-dimensional representation of the chemical structure or biological sequence, (ii) a name of a) a chemical compound represented by the chemical structure, or b) a biological species comprising a DNA molecule, an RNA molecule, or a polypeptide represented by the biological sequence, and (iii) a plurality of properties of the chemical compound or biological species.
  • the method also includes augmenting, by the processor, the video image with at least one of (i) at least a portion of the additional information, and (ii) one or more controls configured, upon selection, to present at least a portion of the additional information.
  • the method further includes, after augmenting the video image, providing the video data for presentation upon a display controlled by the first computing device.
  • the video data is presented by the first computing device in substantially real time in relation to the capture of the video data by the first computing device.
  • the first computing device includes the second computing device.
  • the graphical representation of the chemical structure includes a two-dimensional drawing of the chemical structure.
  • augmenting the video image includes determining an orientation of the first computing device; and rendering the three-dimensional representation based in part upon the orientation of the first computing device.
  • the additional information further includes (iv) a storage location.
  • matching additional information to the graphical representation includes determining the storage location, wherein determining the storage location includes identifying the chemical compound or biological species within a catalog.
  • augmenting the video image includes rendering a first control of the one or more controls as a semi-transparent image overlaid upon the video image.
  • augmenting the video image includes superimposing the graphical representation with at least a portion of the additional information.
  • matching the additional information to the graphical representation includes comparing the graphical representation to one or more stored graphical representations. In some embodiments, matching the additional information to the graphical representation includes digitally refining the portion of the graphical representation prior to comparing the graphical representation to the one or more stored graphical representations.
  • the method also includes receiving, by the processor, an indication of selection of a first control of the one or more controls; and augmenting, by the processor, subsequent video data with the portion of the additional information related to the first control.
  • augmenting the subsequent video data includes adding audio data to the video data, wherein the audio data includes a verbal description of the additional information.
  • Another aspect of the invention relates to a processor and a memory having instructions stored thereon.
  • the instructions, when executed by the processor, cause the processor to receive video data captured by a computing device, wherein the video data includes a video image; identify, within the video image, a visual identification of a chemical structure or a biological sequence; and match additional information to the visual identification.
  • the additional information includes at least one of (i) a graphical representation of the chemical structure or biological sequence, (ii) a name of a) a chemical compound represented by the chemical structure, or b) a biological species comprising a DNA molecule, an RNA molecule, or a polypeptide represented by the biological sequence, and (iii) a plurality of properties of the chemical compound or biological species.
  • the instructions, when executed by the processor, further cause the processor to augment the video image with at least one of (i) at least a portion of the additional information, and (ii) one or more controls configured, upon selection, to present at least a portion of the additional information. Augmenting the video image includes positionally linking the at least one of (i) the portion of the additional information and (ii) the one or more controls to the visual identification such that, upon movement of the visual identification within the video image, the at least one of (i) the portion of the additional information and (ii) the one or more controls undergo a corresponding motion.
  • the instructions, when executed by the processor, further cause the processor to provide the video data for presentation upon a display, wherein the video data is presented in substantially real time in relation to the capture of the video data by the computing device.
  • the computing device includes the processor.
  • the visual identification includes a two-dimensional drawing of a chemical structure or biological sequence.
  • the instructions, when executed, cause the processor to: receive, from a user of the computing device, an indication of video freeze; and responsive to the indication of video freeze, provide for presentation a still image including a most recent augmented video data frame.
  • the instructions, when executed, cause the processor to: receive, after providing the still image, selection of a first control of the one or more controls; apply, responsive to selection of the first control, the portion of the additional information related to the first control to the still image to determine a second still image; and provide, for presentation upon the display, the second still image.
  • Another aspect of the invention relates to a non-transitory computer readable medium having instructions stored thereon.
  • the instructions, when executed by a processor, cause the processor to: receive video data destined for display upon a computing device, wherein the video data includes a video image; identify, within the video image, a graphical representation of one of a chemical structure and a biological sequence; match additional information to the graphical representation, wherein the additional information is arranged in a plurality of feature categories; augment the video image with one or more controls configured, upon selection, to present at least a portion of the additional information belonging to a first category of the plurality of feature categories; after augmenting the video image, provide the video data for presentation upon a display controlled by the computing device; receive, responsive to selection of a first control of the one or more controls by a user of the computing device, an indication corresponding to the first category; responsive to the indication, augment subsequent video data with at least a portion of the additional information belonging to the first category; and after augmenting the subsequent video data, provide the subsequent video data for presentation.
  • the computing device includes the processor.
  • the video data was captured by the computing device; and the video data is presented by the computing device in substantially real time in relation to the capture of the video data by the computing device.
  • FIG. 1 is a block diagram of an example system for visually augmenting a graphical rendering of a chemical structure representation or biological sequence representation with multi-dimensional information;
  • FIG. 2 is a flow chart of an example method for visually augmenting a graphical rendering of a chemical structure representation or biological sequence representation with multi-dimensional information;
  • FIG. 3 is a screen shot of an example user interface demonstrating visual augmentation of a graphical rendering of a chemical structure representation;
  • FIG. 4 is a flow chart of an example method for deriving and presenting multi-dimensional information related to a graphical rendering of a chemical structure representation or biological sequence representation;
  • FIG. 5 is a block diagram of an example network environment for visually augmenting a graphical rendering of a chemical structure representation with multi-dimensional information; and
  • FIG. 6 is a block diagram of an example computing device and an example mobile computing device.
  • apparatus, systems, and methods of the claimed invention encompass variations and adaptations developed using information from the embodiments described herein. Adaptation and/or modification of the apparatus, systems, and methods described herein may be performed by those of ordinary skill in the relevant art.
  • biological sequence refers to the sequence of nucleotide or amino acid residues of a biological molecule (e.g., a DNA molecule, an RNA molecule, or a polypeptide).
  • a biological sequence can be graphically represented in various ways, e.g., textually by a sequence of letters (e.g., using a 1-letter nucleotide code or using a 1-letter or 3-letter amino acid code), or structurally (e.g., as a chemical structure, a ball-and-stick model, a ribbon diagram, a space-filling model, or an electrostatic model).
  • FIG. 1 is a block diagram of an example system 100 for visually augmenting a graphical rendering of a chemical structure representation or biological sequence representation with multi-dimensional information.
  • the system 100 includes a computing device 102 executing a software application 124 for applying augmented reality data to captured video information by identifying either a chemical structure or a biological sequence within the captured video and offering augmentation features and additional information regarding the identified chemical structure or biological sequence.
  • the computing device 102 captures a video of a chemical structure 106a drawn upon a piece of paper 104a, identifies the chemical structure 106a as being a particular chemical compound, matches additional information to the chemical compound for augmenting the video, and displays a video image 104b of the piece of paper 104a augmented with the additional information within a display area 112 of the computing device 102.
  • the additional information, for example, includes a two-dimensional chemical structure 106b overlaying the hand-drawn chemical structure 106a and a three-dimensional graphical representation 122 of the chemical structure 106b presented adjacent to the two-dimensional chemical structure 106b.
  • the software application 124 executing upon the computing device 102 accesses a remote server 108 (e.g., via a network 110) to conduct a portion of the operations to achieve the augmented video presented within the display 112.
  • the software application 124 executing upon the computing device 102 may pass all or a portion of the video data to the server 108.
  • the software application 124 executing upon the computing device 102 may pass user configuration settings (e.g., as saved in a configuration data store 126), user inputs, chemical structure information, chemical compound information, biological sequence information, biological species information, or other data to the server 108.
  • the server 108, in response, may provide data for augmenting the video data and/or an augmented video image to the computing device 102.
  • the server 108 may include an image identification engine 114 for identifying a graphical indication of a chemical structure or biological sequence within the video data and correlating the chemical structure to a chemical compound or the biological sequence to a biological species (e.g., DNA molecule, RNA molecule, polypeptide, etc.).
  • the chemical structure or biological sequence captured by the video data can include a two-dimensional graphical representation, a three-dimensional graphical representation, or another graphical indication of a chemical compound or biological species.
  • the image identification engine 114 may recognize a written name or formula of a biological species or chemical compound within the video data.
  • the chemical structure or biological sequence captured within the video data, in some additional examples, may be illustrated or hand-drawn on a flat or curved surface such as, in some examples, a white board, chalk board, or transparent hood of a laboratory experiment station.
  • the server 108 may include an additional information matching engine 116 for matching additional information to the identified chemical compound or biological species, such as, in some examples, a name of the chemical compound or biological species, properties of the chemical compound or biological species, or a graphical representation (e.g., two dimensional, three dimensional, etc.) of the chemical structure or biological sequence.
  • the additional information includes a location of the chemical compound or biological species within a laboratory, or reaction information regarding an ongoing experiment involving the chemical structure or biological sequence.
  • the additional information is accessed by the additional information matching engine 116 from one or more databases, such as a data store 120.
  • a portion of the additional information is accessed from a public repository, such as the PubChem Compound database maintained by the National Center for Biotechnology Information (NCBI), the molecular spectral databases maintained by the National Institute of Standards and Technology (NIST), the Genbank sequence database maintained by NCBI, or the UniProt protein sequence database maintained by the UniProt consortium.
  • a portion of the additional information is accessed from an electronic lab notebook (ELN) system.
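For the public-repository case mentioned above, a lookup might resemble the following sketch, which uses PubChem's PUG REST interface; the function name and the choice of properties are illustrative, and the compound name is assumed to come from the identification step.

```python
import requests

def fetch_pubchem_properties(name: str) -> dict:
    """Look up basic properties for a compound name via PubChem PUG REST."""
    url = ("https://pubchem.ncbi.nlm.nih.gov/rest/pug/compound/name/"
           f"{requests.utils.quote(name)}/property/"
           "MolecularFormula,MolecularWeight,IUPACName/JSON")
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    # PUG REST nests results under PropertyTable -> Properties.
    return resp.json()["PropertyTable"]["Properties"][0]

# e.g., fetch_pubchem_properties("caffeine")["MolecularFormula"] -> "C8H10N4O2"
```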
  • the server 108 includes a three-dimensional view engine 118 for presenting and adjusting a three-dimensional graphical representation of the identified chemical compound or biological species.
  • the three-dimensional view engine 118 may be used to render the three-dimensional graphical representation 122 within the display 112.
  • the three-dimensional view engine 118 modifies the orientation of the three-dimensional graphical representation 122 based upon orientation input, such as a direct user input (e.g., one or more controls linked to the three-dimensional graphical representation 122 for rotating or tilting the aspect of the three-dimensional graphical representation 122) or device orientation input (e.g., as gleaned from one or more orientation sensors of the computing device 102, such as a gyroscope or accelerometer).
  • the server 108 includes a video augmenting engine 128 for building an augmented video for presentation within the display 112.
  • the video augmenting engine 128 may merge the video captured by the computing device 102 with augmentation data such as, in some examples, a portion of the additional information (e.g., obtained at least in part from the data store 120), selectable controls for presenting additional information, and/or a two-dimensional or three-dimensional graphical representation of the chemical structure or biological sequence.
  • the video augmenting engine 128 presents a portion of the augmentation data as semi-transparent images overlaying the original video image. For example, the hand-drawn chemical structure 106a may be partially visible beneath the two-dimensional chemical structure 106b.
  • the video augmenting engine 128 presents a portion of the augmentation data in a manner that “pins” the augmentation data to the video image of the chemical structure 106a.
  • a portion of the augmentation data may move with the position of the chemical structure 106a within the video image.
  • Visual elements may tie the portion of the augmentation data to the chemical structure 106a.
  • arrows, call-out boxes, or lead lines may create a visual connection between the augmentation data and the chemical structure 106b. This may be especially useful, for example, if two or more chemical structures or biological sequences are illustrated within the video data of the display 112. In this manner, the user can immediately interpret which augmentation data belongs to which chemical structure or biological sequence.
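A structural sketch of how the four server-side engines could compose per frame follows; all class and method names are hypothetical, and this is one plausible arrangement rather than the patented design.

```python
class AugmentationServer:
    """Per-frame pipeline mirroring engines 114, 116, 118, and 128."""

    def __init__(self, identifier, matcher, view3d, augmenter):
        self.identifier = identifier   # image identification engine (cf. 114)
        self.matcher = matcher         # additional information matching engine (cf. 116)
        self.view3d = view3d           # three-dimensional view engine (cf. 118)
        self.augmenter = augmenter     # video augmenting engine (cf. 128)

    def process_frame(self, frame, orientation=None):
        hit = self.identifier.find_structure(frame)   # locate a structure/sequence
        if hit is None:
            return frame                              # nothing recognized; pass through
        info = self.matcher.lookup(hit.compound)      # names, properties, locations
        model = self.view3d.render(hit.compound, orientation)
        return self.augmenter.merge(frame, hit.box, info, model)
```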
  • the software application 124 may be used to provide the user with multi-dimensional data regarding the chemical structure 106.
  • the software application 124 may be configured to present a first set of options (e.g., as a default or based upon the user configuration settings in the configuration store 126) as one or more selectable controls (not illustrated), each control representing a category of information related to the chemical structure 106.
  • the categories may include chemical properties, a storage location, or a name of the compound.
  • the software application 124 may update the augmentation data presented within the display 112 to include information identified as pertaining to the selected category.
  • the information may include text, graphics, video, and/or audio information.
  • one or more sub-categories may be presented to the user upon selection of a primary category. For example, upon selection of a properties category, sub-categories of physical properties and chemical properties may be presented. In this manner, a user may drill down to obtain more and more specific information regarding the chemical structure 106 .
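The category drill-down might be modeled as a nested mapping, as in the following sketch; the tree contents are invented for illustration, and a dict-valued node means further sub-category controls while a leaf yields the information itself.

```python
CATEGORIES = {
    "name": "2-acetoxybenzoic acid",
    "storage location": "stock room B, shelf 4",
    "properties": {
        "physical properties": {"appearance": "white crystalline powder"},
        "chemical properties": {"reactivity": "hydrolyzes in moist air"},
    },
}

def select(node, path):
    """Walk the category tree along the user's successive selections."""
    for key in path:
        node = node[key]
    return node

# select(CATEGORIES, ["properties"]) -> two sub-category controls
# select(CATEGORIES, ["properties", "physical properties"]) -> leaf information
```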
  • the computing device 102 may be a tablet computer, laptop computer, desktop computer, an augmented reality wearable computer such as computerized glasses (e.g., Google® Glass™ by Google® Inc. of Mountain View, Calif.), or other computing device in communication with a video capture device and a display.
  • one or more of the image identification engine 114, additional information matching engine 116, three-dimensional view engine 118, and video augmenting engine 128 may reside within the computing device 102 (e.g., as part of the software application 124).
  • the server 108 may include two or more servers, such as a server farm or cloud computing service for video augmentation. Further details and options for providing methods and systems for visually augmenting a graphical rendering of either a chemical structure representation or a biological sequence representation with multi-dimensional information are described below.
  • FIG. 2 is a flow chart of an example method 200 for visually augmenting a graphical rendering of a chemical structure representation with multi-dimensional information.
  • the method 200 may be performed by the server 108 and/or the software application 124, as described in relation to FIG. 1.
  • the method 200 begins with receiving video data destined for presentation upon a user device (202).
  • the video data may be captured using a camera feature of computerized eye glasses or a handheld computing device such as the computing device 102 described in relation to FIG. 1 .
  • the video data may be captured using a video camera device connected to a personal computer, or the video data may be provided in real time from a separate computing device (e.g., broadcast to a number of students' computing devices in a lecture hall).
  • the video includes a video image track.
  • the video data further includes an audio track and/or a metadata track.
  • a graphical representation of either a chemical structure or a biological sequence is identified within the video image track of the video data (204).
  • the graphical representation may be identified by the image identification engine 114, described in relation to FIG. 1.
  • the chemical structure or biological sequence, for example, may be a hand-drawn illustration or print illustration representing a chemical compound or biological species.
  • the chemical structure or biological sequence may be partially presented (e.g., one or more bonds missing, a portion of an atom identifier cut off), obscured, or presented upon a curved surface.
  • the captured image of the chemical compound or biological species is digitally refined to obtain an image for comparison with known chemical structures or biological sequences. For example, once a suspected chemical structure or biological sequence has been identified, the portion of the image containing the suspected chemical structure may be pruned (e.g., stray lines or text removed), image-adjusted, or otherwise digitally altered in an attempt to match the graphical representation of the chemical structure or biological sequence captured within the video image to a known chemical compound or biological species. In a particular example, two or more video frames may be reviewed (e.g., merged, appended) in relation to each other to identify the chemical compound or biological species.
  • a name of a chemical compound or biological species, a chemical formula, or another representation of a chemical compound or biological species may be identified.
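The digital refinement described above might look like the following, under the assumption that OpenCV-style preprocessing is acceptable; the crop box is assumed to come from the detection step, and the thresholds are illustrative.

```python
import cv2
import numpy as np

def refine_region(frame: np.ndarray, box: tuple) -> np.ndarray:
    """Crop, denoise, binarize, and prune stray marks before structure matching."""
    x, y, w, h = box
    crop = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    crop = cv2.fastNlMeansDenoising(crop, h=10)
    # Adaptive thresholding tolerates uneven lighting on paper or whiteboards.
    binary = cv2.adaptiveThreshold(crop, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY_INV, 21, 10)
    # Prune stray lines and specks: drop connected components below a size floor.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] < 20:
            binary[labels == i] = 0
    return binary
```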
  • augmentation data is matched to the chemical compound or biological species to augment the video image (206).
  • the augmentation data may be matched to the chemical compound or biological species, for example, by the additional information matching engine 116, described in relation to FIG. 1.
  • the augmentation data includes image data such as a two-dimensional or three-dimensional representation of the chemical structure or biological sequence.
  • the augmentation data includes additional information such as properties of the chemical compound or biological species, a storage location of the chemical compound or biological species, or reaction information regarding an experiment involving the chemical compound or biological species.
  • the additional information, for example, may be arranged in categories and sub-categories such that a user may drill down to obtain further details regarding the chemical compound or biological sequence.
  • the augmentation data includes control information such as one or more selectable call-out boxes configured, upon selection, to present a portion of the additional information.
  • an indication of device orientation is received (210).
  • data derived from an orientation sensor (e.g., gyroscope, accelerometer, etc.) of a handheld computing device may be provided to aid in the augmentation of the video data.
  • a visual representation of a portion of the additional information is adjusted responsive to the device orientation (212). For example, the orientation of text presented as additional information or written upon a selectable control may be adjusted.
  • a rotation and/or revolution of the three-dimensional graphical representation may be adjusted based upon the orientation of the device. For example, the three-dimensional graphical representation may tilt with the tilting of the handheld device.
  • the orientation of the three-dimensional graphical representation may be adjusted, for example, by the three-dimensional view engine 118, described in relation to FIG. 1.
  • a portion of the augmentation data is applied to the video data prior to presentation upon the user device (214).
  • the augmentation data may be applied to the video data, for example, by the video augmenting engine 128, described in relation to FIG. 1.
  • the video data, for example, may be merged with the augmentation data to create augmented video data for presentation by the user device.
  • a portion of the augmentation data is presented as semi-transparent images overlaying the original video image.
  • a portion of the augmentation data is presented in a manner that “pins” the augmentation data to the graphical representation of the chemical structure or biological sequence.
  • a portion of the augmentation data may move with the position of the graphical representation of the chemical structure or biological sequence.
  • Visual elements may tie a portion of the augmentation data to the graphical representation of the chemical structure or biological sequence. For example, arrows, call-out boxes, or lead lines may create a visual connection between the augmentation data and the graphical representation of the chemical structure or biological sequence.
  • user input may be received (216).
  • a user may select one of the selectable controls presented as augmentation data, or a user may interact with a user-selectable three-dimensional graphical representation of the chemical structure or biological sequence presented as augmentation data.
  • follow-on augmentation data is identified for augmenting the video data (218). For example, based upon the selection by the user of a category of additional information, corresponding image, text, and/or graphic data may be presented upon the video feed as augmentation data.
  • the user may be presented with one or more selectable sub-categories related to the selected category.
  • Whether or not follow-on augmentation data is identified, it is determined whether an orientation change has been received (220).
  • the user alters the orientation of the computing device (e.g., physically tilts the device or rotates the display), causing updated orientation information to be received.
  • the user manipulates the three-dimensional graphical representation of the chemical structure or biological sequence, causing orientation data specific to the three-dimensional graphical representation to be received.
  • the visual representation of at least a portion of the augmentation data is adjusted responsive to the orientation change (212).
  • the augmentation data (and/or, if applicable, follow-on augmentation data) is applied to the subsequent (e.g., current) video data prior to presentation of the augmented subsequent video data upon the user device (214).
  • the method 200 continues to update and adjust as additional video data frames, user inputs, and orientation changes are received.
  • one or more of the steps of the method 200 may be performed in a different order, or one or more steps may be performed in parallel. In some implementations, one or more of the steps of the method 200 may be removed or modified, or one or more steps may be added to the method 200 , while staying within the scope and intent of the method 200 .
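Condensed into code, the loop of method 200 might be sketched as below; the `server` object follows the hypothetical engine interface sketched earlier, and `ui` is an invented stand-in for the device's input and display handling.

```python
def run_method_200(video_source, server, ui):
    """Frame loop: augment (214), handle input (216/218), track orientation (220/212)."""
    follow_on = []            # follow-on augmentation data (step 218)
    orientation = None
    for frame in video_source:                       # step 202: receive video data
        event = ui.poll()                            # step 216: user input, if any
        if event and event.kind == "select":
            follow_on = server.matcher.follow_on(event.category)   # step 218
        elif event and event.kind == "orientation":  # step 220: orientation change
            orientation = event.value                # used in step 212 adjustments
        out = server.process_frame(frame, orientation)   # steps 204 through 212
        for extra in follow_on:
            out = server.augmenter.apply(out, extra)     # step 214
        ui.show(out)                                 # present the augmented video
```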
  • FIG. 3 is a screen shot 300 of an example user interface 304, rendered upon a computing device 302.
  • the user interface 304 demonstrates an example of visual augmentation of a graphical rendering of a chemical structure representation.
  • the user interface 304 may be presented in relation to capturing video data regarding the hand-drawn chemical structure 106a upon the paper 104a, as described in relation to FIG. 1.
  • the image of the paper 104a containing the hand-drawn chemical structure 106a has been obscured by semi-transparent augmentation data, including a two-dimensional graphical representation 306 of the chemical structure 106a and a series of selectable call-out boxes 308 identifying categories of additional information pertaining to the chemical structure 106a, such as a locations category call-out box 308a, a properties category call-out box 308b, and a name category call-out box 308c.
  • augmentation data regarding the selected category may be presented to the user in the display area of the computing device 302.
  • one or more additional controls are presented to the user.
  • the additional controls may be configured to present, upon selection, information regarding one or more sub-categories of the identified category.
  • the one or more additional controls may include a control configured to allow the user to freeze the video image containing the information regarding the identified category. For example, upon being presented with information relevant to the identified chemical structure or biological sequence, the user may desire to take a “snapshot” of the information for review without having to maintain video connection with the identified chemical structure or biological sequence (e.g., move the camera lens of the computing device 102 away from pointing at the paper 104a). In some implementations, although the video has been frozen on a snapshot image (e.g., current frame), the controls remain active for pursuing additional information regarding the chemical structure or biological sequence illustrated within the snapshot image.
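One plausible realization of the freeze behavior, with invented names (compare the still-image claims earlier in this document): the newest augmented frame is retained, and while frozen, control selections re-augment the still image to derive a second still.

```python
class FreezeFrame:
    """Snapshot the latest augmented frame while keeping controls active."""

    def __init__(self):
        self.last = None      # most recent augmented frame (live)
        self.frozen = None    # snapshot, when the user freezes the video

    def update(self, augmented_frame):
        """Call once per frame; returns the frame to display."""
        self.last = augmented_frame
        return self.frozen if self.frozen is not None else augmented_frame

    def toggle(self):
        """Freeze on the most recent frame, or resume live video."""
        self.frozen = self.last if self.frozen is None else None

    def apply_control(self, server, extra):
        """Controls stay active on the still: derive a second still image."""
        if self.frozen is not None:
            self.frozen = server.augmenter.apply(self.frozen, extra)
```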
  • the user interface 304 also includes a three-dimensional graphical representation 310 of the chemical structure 106a, presented adjacent to the two-dimensional graphical representation 306.
  • the orientation of the three-dimensional graphical representation 310 may automatically adjust (e.g., tilt or rotate).
  • the three-dimensional graphical representation 310 slowly revolves and/or rotates as a moving image within the video presented by the user interface 304.
  • a search field 312 is presented.
  • the search field 312 may be used, for example, for searching for additional information related to the chemical structure 106a.
  • FIG. 4 is a flow chart of an example method 400 for deriving and presenting multi-dimensional information related to a graphical rendering of a chemical structure representation or biological sequence representation.
  • the method 400 may be performed by the server 108 and/or the software application 124, as described in relation to FIG. 1.
  • the method 400 begins with receiving video data destined for presentation upon a user device (402).
  • the video data may be captured using a camera feature of computerized eye glasses or a handheld computing device such as the computing device 102 described in relation to FIG. 1 .
  • the video data may be captured using a video camera device connected to a personal computer, or the video data may be provided in real time from a separate computing device (e.g., broadcast to a number of students' computing devices in a lecture hall).
  • the video includes a video image track.
  • the video further includes an audio track and/or a metadata track.
  • a graphical representation of either a chemical structure or a biological sequence is identified within the video image track of the video data (404).
  • the graphical representation may be identified by the image identification engine 114, described in relation to FIG. 1.
  • the chemical structure or biological sequence, for example, may be a hand-drawn illustration or print illustration representing a chemical compound or biological species.
  • the chemical structure or biological sequence may be partially presented (e.g., one or more bonds missing, a portion of an atom identifier cut off), obscured, or presented upon a curved surface.
  • the captured image of the chemical compound or biological species is digitally refined to obtain an image for comparison with known chemical structures or biological sequences. For example, once a suspected chemical structure or biological sequence has been identified, the portion of the image containing the suspected chemical structure may be pruned (e.g., stray lines or text removed), image-adjusted, or otherwise digitally altered in an attempt to match the graphical representation of the chemical structure or biological sequence captured within the video image to a known chemical compound or biological species. In a particular example, two or more video frames may be reviewed (e.g., merged, appended) in relation to each other to identify the chemical compound or biological species.
  • a name of a chemical compound or biological species, a chemical formula, or another representation of a chemical compound or biological species may be identified.
  • augmentation features are identified for adding information related to the chemical compound or biological species to the video data (406).
  • the augmentation features may be identified, for example, by the additional information matching engine 116, described in relation to FIG. 1.
  • the augmentation features, for example, may be arranged in categories and sub-categories such that a user may drill down to obtain further details regarding the chemical compound or biological species.
  • each augmentation feature is associated with a user interface control configured, upon selection, to present category information related to the chemical compound or biological species.
  • the video image, including one or more controls for selecting the augmentation features, is provided for presentation to the user on the user computing device (408).
  • the controls may include any combination of call-out boxes, radio buttons, a configuration listing, a search box, or a voice activated listing of available options for further information.
  • the controls are pinned to the identified chemical structure or biological sequence, such that, as the identified chemical structure or biological sequence moves within the display region of the user computing device (e.g., due to movement of the camera relative to the chemical structure or biological sequence being captured), the controls move within the display to be positioned relative to the identified chemical structure or biological sequence.
  • one or more controls are made visible or rendered opaque upon initial input from a user (e.g., identification of a touch by the user upon a touch screen display, identification of a “click” by the user via an input tool, etc.).
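Pinning can be sketched by re-locating the identified structure in each frame and offsetting the control from wherever it currently sits; simple template matching is assumed here purely for illustration (the template is a grayscale crop of the structure taken from an earlier frame).

```python
import cv2
import numpy as np

def pin_control(frame: np.ndarray, template: np.ndarray, offset=(0, -40)):
    """Return the control's (x, y), anchored relative to the tracked structure."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    result = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(result)   # best-match location
    if score < 0.5:
        return None        # track lost; the caller may hide the control
    return (top_left[0] + offset[0], top_left[1] + offset[1])
```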
  • the video data is augmented with information related to the selected feature (412). For example, upon receiving an indication of a selection of a particular control, information related to that control (e.g., one or more of text, graphic, video, and audio information) may be applied to subsequent video data prior to presentation of the subsequent video data.
  • one or more sub-features of the selected features are identified (414). For example, one or more sub-categories may be identified related to a selected information category. Upon identification of the sub-categories, in some implementations, the video data may be augmented with one or more controls for selecting the one or more sub-categories. If, instead, no sub-categories are available (e.g., the user has reached the finest granularity of information regarding the current topic), in some implementations, one or more controls regarding a higher level of category abstraction may be presented (e.g., akin to a “back” or “related information” option).
  • the augmented subsequent video data is presented with feature information and/or one or more controls for selecting sub-features of the identified feature (416).
  • the augmented subsequent video data may be presented in a manner similar to that described in relation to step 408.
  • one or more of the steps of the method 400 may be performed in a different order, or one or more steps may be performed in parallel. In some implementations, one or more of the steps of the method 400 may be removed or modified, or one or more steps may be added to the method 400 , while staying within the scope and intent of the method 400 .
  • as shown in FIG. 5, the cloud computing environment 500 may include one or more resource providers 502a, 502b, 502c (collectively, 502).
  • Each resource provider 502 may include computing resources.
  • computing resources may include any hardware and/or software used to process data.
  • computing resources may include hardware and/or software capable of executing algorithms, computer programs, and/or computer applications.
  • exemplary computing resources may include application servers and/or databases with storage and retrieval capabilities.
  • Each resource provider 502 may be connected to any other resource provider 502 in the cloud computing environment 500.
  • the resource providers 502 may be connected over a computer network 508.
  • Each resource provider 502 may be connected to one or more computing devices 504a, 504b, 504c (collectively, 504), over the computer network 508.
  • the cloud computing environment 500 may include a resource manager 506.
  • the resource manager 506 may be connected to the resource providers 502 and the computing devices 504 over the computer network 508.
  • the resource manager 506 may facilitate the provision of computing resources by one or more resource providers 502 to one or more computing devices 504.
  • the resource manager 506 may receive a request for a computing resource from a particular computing device 504.
  • the resource manager 506 may identify one or more resource providers 502 capable of providing the computing resource requested by the computing device 504.
  • the resource manager 506 may select a resource provider 502 to provide the computing resource.
  • the resource manager 506 may facilitate a connection between the resource provider 502 and a particular computing device 504.
  • the resource manager 506 may establish a connection between a particular resource provider 502 and a particular computing device 504. In some implementations, the resource manager 506 may redirect a particular computing device 504 to a particular resource provider 502 with the requested computing resource.
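An illustrative resource-manager policy consistent with this description (provider objects and their fields are hypothetical): choose, among the providers able to satisfy the request, the least-loaded one, then connect or redirect the requesting device.

```python
def select_provider(providers, request):
    """Pick a capable, lightly loaded provider for the requested resource."""
    capable = [p for p in providers if request.resource in p.resources]
    if not capable:
        raise LookupError(f"no provider offers {request.resource}")
    chosen = min(capable, key=lambda p: p.load)
    # Either establish the connection directly, or redirect the device.
    return chosen.connect(request.device)
```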
  • FIG. 6 shows an example of a computing device 600 and a mobile computing device 650 that can be used to implement the techniques described in this disclosure.
  • the computing device 600 is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers.
  • the mobile computing device 650 is intended to represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smart-phones, and other similar computing devices.
  • the components shown here, their connections and relationships, and their functions, are meant to be examples only, and are not meant to be limiting.
  • the computing device 600 includes a processor 602, a memory 604, a storage device 606, a high-speed interface 608 connecting to the memory 604 and multiple high-speed expansion ports 610, and a low-speed interface 612 connecting to a low-speed expansion port 614 and the storage device 606.
  • Each of the processor 602, the memory 604, the storage device 606, the high-speed interface 608, the high-speed expansion ports 610, and the low-speed interface 612 are interconnected using various buses, and may be mounted on a common motherboard or in other manners as appropriate.
  • the processor 602 can process instructions for execution within the computing device 600, including instructions stored in the memory 604 or on the storage device 606 to display graphical information for a GUI on an external input/output device, such as a display 616 coupled to the high-speed interface 608.
  • multiple processors and/or multiple buses may be used, as appropriate, along with multiple memories and types of memory.
  • multiple computing devices may be connected, with each device providing portions of the necessary operations (e.g., as a server bank, a group of blade servers, or a multi-processor system).
  • the memory 604 stores information within the computing device 600.
  • the memory 604 is a volatile memory unit or units.
  • the memory 604 is a non-volatile memory unit or units.
  • the memory 604 may also be another form of computer-readable medium, such as a magnetic or optical disk.
  • the storage device 606 is capable of providing mass storage for the computing device 600.
  • the storage device 606 may be or contain a computer-readable medium, such as a floppy disk device, a hard disk device, an optical disk device, or a tape device, a flash memory or other similar solid state memory device, or an array of devices, including devices in a storage area network or other configurations.
  • Instructions can be stored in an information carrier.
  • the instructions, when executed by one or more processing devices (for example, processor 602), perform one or more methods, such as those described above.
  • the instructions can also be stored by one or more storage devices such as computer- or machine-readable mediums (for example, the memory 604, the storage device 606, or memory on the processor 602).
  • the high-speed interface 608 manages bandwidth-intensive operations for the computing device 600, while the low-speed interface 612 manages lower bandwidth-intensive operations. Such allocation of functions is an example only.
  • the high-speed interface 608 is coupled to the memory 604, the display 616 (e.g., through a graphics processor or accelerator), and to the high-speed expansion ports 610, which may accept various expansion cards (not shown).
  • the low-speed interface 612 is coupled to the storage device 606 and the low-speed expansion port 614.
  • the low-speed expansion port 614, which may include various communication ports (e.g., USB, Bluetooth®, Ethernet, wireless Ethernet), may be coupled to one or more input/output devices, such as a keyboard, a pointing device, a scanner, or a networking device such as a switch or router, e.g., through a network adapter.
  • the computing device 600 may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a standard server 620, or multiple times in a group of such servers. In addition, it may be implemented in a personal computer such as a laptop computer 622. It may also be implemented as part of a rack server system 624. Alternatively, components from the computing device 600 may be combined with other components in a mobile device (not shown), such as a mobile computing device 650. Each of such devices may contain one or more of the computing device 600 and the mobile computing device 650, and an entire system may be made up of multiple computing devices communicating with each other.
  • the mobile computing device 650 includes a processor 652, a memory 664, an input/output device such as a display 654, a communication interface 666, and a transceiver 668, among other components.
  • the mobile computing device 650 may also be provided with a storage device, such as a micro-drive or other device, to provide additional storage.
  • Each of the processor 652 , the memory 664 , the display 654 , the communication interface 666 , and the transceiver 668 is interconnected using various buses, and several of the components may be mounted on a common motherboard or in other manners as appropriate.
  • the processor 652 can execute instructions within the mobile computing device 650 , including instructions stored in the memory 664 .
  • the processor 652 may be implemented as a chipset of chips that include separate and multiple analog and digital processors.
  • the processor 652 may provide, for example, for coordination of the other components of the mobile computing device 650 , such as control of user interfaces, applications run by the mobile computing device 650 , and wireless communication by the mobile computing device 650 .
  • the processor 652 may communicate with a user through a control interface 658 and a display interface 656 coupled to the display 654 .
  • the display 654 may be, for example, a TFT (Thin-Film-Transistor Liquid Crystal Display) display or an OLED (Organic Light Emitting Diode) display, or other appropriate display technology.
  • the display interface 656 may comprise appropriate circuitry for driving the display 654 to present graphical and other information to a user.
  • the control interface 658 may receive commands from a user and convert them for submission to the processor 652 .
  • an external interface 662 may provide communication with the processor 652 , so as to enable near area communication of the mobile computing device 650 with other devices.
  • the external interface 662 may provide, for example, for wired communication in some implementations, or for wireless communication in other implementations, and multiple interfaces may also be used.
  • the memory 664 stores information within the mobile computing device 650 .
  • the memory 664 can be implemented as one or more of a computer-readable medium or media, a volatile memory unit or units, or a non-volatile memory unit or units.
  • An expansion memory 674 may also be provided and connected to the mobile computing device 650 through an expansion interface 672 , which may include, for example, a SIMM (Single In Line Memory Module) card interface.
  • the expansion memory 674 may provide extra storage space for the mobile computing device 650 , or may also store applications or other information for the mobile computing device 650 .
  • the expansion memory 674 may include instructions to carry out or supplement the processes described above, and may include secure information also.
  • the expansion memory 674 may be provided as a security module for the mobile computing device 650 , and may be programmed with instructions that permit secure use of the mobile computing device 650 .
  • secure applications may be provided via the SIMM cards, along with additional information, such as placing identifying information on the SIMM card in a non-hackable manner.
  • the memory may include, for example, flash memory and/or NVRAM memory (non-volatile random access memory), as discussed below.
  • instructions are stored in an information carrier such that the instructions, when executed by one or more processing devices (for example, processor 652 ), perform one or more methods, such as those described above.
  • the instructions can also be stored by one or more storage devices, such as one or more computer- or machine-readable mediums (for example, the memory 664 , the expansion memory 674 , or memory on the processor 652 ).
  • the instructions can be received in a propagated signal, for example, over the transceiver 668 or the external interface 662 .
  • the mobile computing device 650 may communicate wirelessly through the communication interface 666 , which may include digital signal processing circuitry where necessary.
  • the communication interface 666 may provide for communications under various modes or protocols, such as GSM voice calls (Global System for Mobile communications), SMS (Short Message Service), EMS (Enhanced Messaging Service), or MMS messaging (Multimedia Messaging Service), CDMA (code division multiple access), TDMA (time division multiple access), PDC (Personal Digital Cellular), WCDMA (Wideband Code Division Multiple Access), CDMA2000, or GPRS (General Packet Radio Service), among others.
  • a GPS (Global Positioning System) receiver module 670 may provide additional navigation- and location-related wireless data to the mobile computing device 650 , which may be used as appropriate by applications running on the mobile computing device 650 .
  • the mobile computing device 650 may also communicate audibly using an audio codec 660 , which may receive spoken information from a user and convert it to usable digital information.
  • the audio codec 660 may likewise generate audible sound for a user, such as through a speaker, e.g., in a handset of the mobile computing device 650 .
  • Such sound may include sound from voice telephone calls, may include recorded sound (e.g., voice messages, music files, etc.) and may also include sound generated by applications operating on the mobile computing device 650 .
  • the mobile computing device 650 may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a cellular telephone 680 . It may also be implemented as part of a smart-phone 682 , personal digital assistant, or other similar mobile device.
  • implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, specially designed ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof.
  • These various implementations can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device.
  • machine-readable medium and computer-readable medium refer to any computer program product, apparatus and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal.
  • machine-readable signal refers to any signal used to provide machine instructions and/or data to a programmable processor.
  • the systems and techniques described here can be implemented on a computer having a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the computer.
  • Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user can be received in any form, including acoustic, speech, or tactile input.
  • the systems and techniques described here can be implemented in a computing system that includes a back end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front end component (e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back end, middleware, or front end components.
  • the components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network (LAN), a wide area network (WAN), and the Internet.
  • the computing system can include clients and servers.
  • a client and server are generally remote from each other and typically interact through a communication network.
  • the relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
  • embodiments of the present disclosure may be provided as one or more computer-readable programs embodied on or in one or more articles of manufacture.
  • the article of manufacture may be any suitable hardware apparatus, such as, for example, a floppy disk, a hard disk, a CD ROM, a CD-RW, a CD-R, a DVD ROM, a DVD-RW, a DVD-R, a flash memory card, a PROM, a RAM, a ROM, or a magnetic tape.
  • the computer-readable programs may be implemented in any programming language. Some examples of languages that may be used include C, C++, and Java.
  • the software programs may be further translated into machine language or virtual machine instructions and stored in a program file in that form. The program file may then be stored on or in one or more of the articles of manufacture.
  • systems and methods for visually augmenting a graphical rendering of a chemical structure representation with multi-dimensional information are provided. Having described certain implementations of visual augmentation of a graphical rendering of a chemical structure representation or biological sequence representation with multi-dimensional information, it will now become apparent to one of skill in the art that other implementations incorporating the concepts of the disclosure may be used. Therefore, the disclosure should not be limited to certain implementations, but rather should be limited only by the spirit and scope of the following claims.

Abstract

In certain embodiments, the invention relates to systems, methods, and apparatus that allow a user to visually augment a graphical rendering of either a chemical structure representation or a biological sequence representation with multi-dimensional information. A user captures a video image using a computing device such as a hand-held smart phone, computerized eye glasses or tablet computer. The video image includes information regarding at least one of a chemical structure and a biological sequence. A processor identifies, within the video image, a graphical representation of at least one of a chemical structure and a biological structure. The processor augments the graphical representation with additional information and provides the video data for presentation upon a display controlled by the computing device. The computing device presents the video data in substantially real time in relation to the capture of the video data by the computing device.

Description

RELATED APPLICATIONS
The present application claims priority to and the benefit of U.S. Provisional Application No. 61/779,607, entitled “Visually Augmenting a Graphical Rendering of a Chemical Structure Representation or Biological Sequence Representation with Multi-Dimensional Information,” filed Mar. 13, 2013, the contents of which are hereby incorporated by reference in their entirety.
BACKGROUND
Chemical structure rendering software is widely used by research and educational institutions to depict chemical structures and chemical reactions of interest. Unlike chemical formulas or chemical names, structural formulas provide a graphical representation of the molecular structure. A graphical chemical structure representation is capable of indicating the arrangements of atoms in a way that a chemical formula cannot.
Biological sequence and large molecule rendering software is widely used by research and educational institutions to depict biological sequences, including nucleotide and amino acid sequences of interest. A three-dimensional graphical representation can be extrapolated from a nucleotide and/or amino acid sequence to depict the arrangements of atoms, nucleotides, and/or amino acid residues in a way that a simple sequence (e.g., a nucleotide and/or amino acid sequence) cannot.
Augmented reality refers to augmenting the real world environment with computer-generated input such as graphics, video, or audio information. In handheld device augmented reality, digital objects can be overlaid on real world video data captured on the handheld computing device. Examples of handheld device augmented reality include the Layar™ print media augmentation application by Layar of Amsterdam, the Netherlands, the Word Lens augmented reality translation application by Quest Visual of San Francisco, Calif., and the Wikitude® Augmented Reality mobile platform by Wikitude GmbH of Salzburg, Austria.
SUMMARY OF THE INVENTION
Described herein are various embodiments of systems, methods, and apparatus that allow a user to visually augment a graphical rendering of either a chemical structure representation or a biological sequence representation with multi-dimensional information. Initially, using a computing device such as a hand-held smart phone, computerized eye glasses or tablet computer, a user captures a video image containing information regarding at least one of a chemical structure and a biological sequence. In one example, the video image may capture an illustration of a chemical structure, such as a two-dimensional rendering of a chemical structure drawn on a piece of paper, white board, chalk board, or transparent drawing board as in a laboratory. Instead of a graphical rendering of a chemical structure, in another example, the video image may capture a written chemical name, biological species name, formula, or other visual indication of a chemical compound or biological species.
In an example embodiment, the computing device used to capture the video image accesses software configured to identify the captured chemical structure or biological sequence. The software is further configured to augment the video image, prior to being displayed to the user (e.g., in a display area of the computing device or on a separate display), with information regarding the captured chemical structure or biological sequence. The information, in some examples, can include the chemical name, biological species name, properties of the chemical compound or biological species, whether the identified chemical compound or biological species is available in a given lab or stock room, and (if identified in a lab or stock room) a storage location of the identified chemical compound or biological species. In another example, the information can include a three-dimensional representation of the chemical structure or biological sequence.
The display containing the augmented video image, in a preferred embodiment, is presented in substantially real-time, such that the video is captured, augmented with additional information pertaining to a chemical structure or biological sequence rendered therein, and displayed to the user while the user continues to capture the video image containing the image of the chemical structure or biological sequence. The additional information, for example, may be positionally linked to the captured image of the chemical structure or biological sequence (or other graphical representation) being viewed. For example, the additional information may be superimposed on, or placed in proximity to, the video image of a two-dimensional graphical chemical structure captured within the video image.
Call-out boxes may be displayed on the screen prompting the user to choose to display particular additional information such as, in some examples, the name, storage location, and/or properties of the identified chemical structure or biological species. The call-out boxes, for example, may be superimposed on the live video image being viewed such that it is clear to the user what portion of the captured video image has been identified by the software as a chemical structure or biological sequence. For example, the additional information may be displayed (i) on top of the video image of an identified graphical rendering of a chemical structure or biological sequence, (ii) in a call-out box pointing to the identified graphical rendering of the chemical structure or biological sequence, or (iii) superimposed on top of (e.g., at least partially overlapping) the identified graphical rendering of the chemical structure or biological sequence. The additional information, in some embodiments, is rendered partially transparent such that the captured video image is visible beneath the additional information. The additional information, in this circumstance, may optionally be rendered opaque responsive to selection by a user (e.g., receiving a touch input corresponding to a transparent call-out box rendered upon a touch screen device, etc.). If multiple chemical structure representations and/or biological sequence representations are identified, the position of the additional information (or displayed information prompts), in some embodiments, indicates which identified chemical structure or biological sequence correlates to which additional information.
A three-dimensional representation of the identified chemical structure or biological sequence may be added to the video image prior to display. The position of the three-dimensional representation may correspond to the position of the identified chemical structure or biological sequence (e.g., it may be superimposed or partially superimposed). In some embodiments, the user may be provided the opportunity to interact with the three-dimensional representation. For example, the user may be able to rotate, resize, and/or relocate an added three-dimensional representation of the captured chemical structure or biological sequence as rendered upon the display. The three-dimensional representation may be rotated, for example, as a function of the position of the mobile device (e.g., tilting the device results in tilting the three-dimensional representation). The three-dimensional representation, in some embodiments, rotates automatically (e.g., slowly spinning and/or tilting upon the display).
In various embodiments, the systems, methods, and apparatus utilize or include a tablet computer, a mobile phone device, an augmented reality wearable computer, or any other computer device or system capable of capturing and presenting video data. In further embodiments, the systems, methods, and apparatus utilize or include a laptop computer, desktop computer, notebook computer or other computer device or system interfacing with a device capable of capturing and presenting video data (e.g., through a wired or wireless connection). The systems, methods, and apparatus have applications in a wide variety of industries and environments that work with graphical representations of chemical structural formulas, such as laboratories, research facilities, and classroom environments.
One aspect of the invention relates to a method including receiving video data captured by a first computing device, wherein the video data includes a video image. The method includes identifying, within the video image, by a processor of a second computing device, a graphical representation of one of a chemical structure and a biological sequence. The method also includes matching, by the processor, additional information to the graphical representation. The additional information includes at least one of (i) a three-dimensional representation of the chemical structure or biological sequence, (ii) a name of a) a chemical compound represented by the chemical structure, or b) a biological species comprising a DNA molecule, an RNA molecule, or a polypeptide represented by the biological sequence, and (iii) a plurality of properties of the chemical compound or biological species. The method also includes augmenting, by the processor, the video image with at least one of (i) at least a portion of the additional information, and (ii) one or more controls configured, upon selection, to present at least a portion of the additional information. The method further includes, after augmenting the video image, providing the video data for presentation upon a display controlled by the first computing device. The video data is presented by the first computing device in substantially real time in relation to the capture of the video data by the first computing device.
In some embodiments, the first computing device includes the second computing device. In some embodiments, the graphical representation of the chemical structure includes a two-dimensional drawing of the chemical structure. In some embodiments, augmenting the video image includes determining an orientation of the first computing device; and rendering the three-dimensional representation based in part upon the orientation of the first computing device. In some embodiments, the additional information further includes (iv) a storage location. In some embodiments, matching additional information to the graphical representation includes determining the storage location, wherein determining the storage location includes identifying the chemical compound or biological species within a catalog. In some embodiments, augmenting the video image includes rendering a first control of the one or more controls as a semi-transparent image overlaid upon the video image. In some embodiments, augmenting the video image includes superimposing the graphical representation with at least a portion of the additional information. In some embodiments, matching the additional information to the graphical representation includes comparing the graphical representation to one or more stored graphical representations. In some embodiments, matching the additional information to the graphical representation includes digitally refining the portion of the graphical representation prior to comparing the graphical representation to the one or more stored graphical representations.
In some embodiments, the method also includes receiving, by the processor, an indication of selection of a first control of the one or more controls; and augmenting, by the processor, subsequent video data with the portion of the additional information related to the first control. In some embodiments, augmenting the subsequent video data includes adding audio data to the video data, wherein the audio data includes a verbal description of the additional information.
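A minimal sketch, in Python, of the receive/identify/match/augment/present flow recited in this aspect; the helper names, stub bodies, and data shapes are hypothetical illustrations, not the claimed implementation:

```python
# Illustrative-only sketch of the method: receive a frame, identify a
# graphical representation, match additional information, augment, and
# hand the frame back for substantially real-time display. All helpers
# are hypothetical stubs standing in for the engines described below.
from dataclasses import dataclass, field
from typing import Optional, Tuple

@dataclass
class Identified:
    bbox: Tuple[int, int, int, int]               # where the structure appears
    additional_info: dict = field(default_factory=dict)
    controls: list = field(default_factory=list)  # selectable call-out boxes

def identify(frame) -> Optional[Identified]:
    return None  # stub: structure/sequence recognition would run here

def match_info(found: Identified) -> Identified:
    found.additional_info.setdefault("name", "unknown")  # stub catalog lookup
    return found

def augment(frame, found: Identified):
    return frame  # stub: draw info/controls pinned to found.bbox

def process_frame(frame):
    found = identify(frame)
    if found is None:
        return frame                  # nothing recognized; pass video through
    return augment(frame, match_info(found))
```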
Another aspect of the invention relates to a processor and a memory having instructions stored thereon. The instructions, when executed by the processor, cause the processor to receive video data captured by a computing device, wherein the video data includes a video image; identify, within the video image, a visual identification of a chemical structure or a biological sequence; match additional information to the visual identification. The additional information includes at least one of (i) a graphical representation of the chemical structure or biological sequence, (ii) a name of a) a chemical compound represented by the chemical structure, or b) a biological species comprising a DNA molecule, an RNA molecule, or a polypeptide represented by the biological sequence, and (iii) a plurality of properties of the chemical compound or biological species. The instructions, when executed by the processor, further cause the processor to augment the video image with at least one of (i) at least a portion of the additional information, and (ii) one or more controls configured, upon selection, to present at least a portion of the additional information. Augmenting the video image includes positionally linking the at least one of (i) the portion of the additional information and (ii) the one or more controls to the visual identification such that, upon movement of the visual identification within the video image, the at least one of (i) the portion of the additional information and (ii) the one or more controls undergo a corresponding motion. The instructions, when executed by the processor, further cause the processor to provide the video data for presentation upon a display, wherein the video data is presented in substantially real time in relation to the capture of the video data by the computing device.
In some embodiments, the computing device includes the processor. In some embodiments, the visual identification includes a two-dimensional drawing of a chemical structure or biological sequence. In some embodiments, the instructions, when executed, cause the processor to: receive, from a user of the computing device, an indication of video freeze; and responsive to the indication of video freeze, provide for presentation a still image including a most recent augmented video data frame. In some embodiments, the instructions, when executed, cause the processor to: receive, after providing the still image, selection of a first control of the one or more controls; apply, responsive to selection of the first control, the portion of the additional information related to the first control to the still image to determine a second still image; and provide, for presentation upon the display, the second still image.
Another aspect of the invention relates to a non-transitory computer readable medium having instructions stored thereon. The instructions, when executed by a processor, cause the processor to: receive video data destined for display upon a computing device, wherein the video data includes a video image; identify, within the video image, a graphical representation of one of a chemical structure and a biological sequence; match additional information to the graphical representation, wherein the additional information is arranged in a plurality of feature categories; augment the video image with one or more controls configured, upon selection, to present at least a portion of the additional information belonging to a first category of the plurality of feature categories; after augmenting the video image, provide the video data for presentation upon a display controlled by the computing device; receive, responsive to selection of a first control of the one or more controls by a user of the computing device, an indication corresponding to the first category; and responsive to the indication, augment subsequent video data with at least a portion of the additional information belonging to the first category; and after augmenting the subsequent video data, provide the subsequent video data for presentation upon the display controlled by the computing device.
In some embodiments, the computing device includes the processor. In some embodiments, the video data was captured by the computing device; and the video data is presented by the computing device in substantially real time in relation to the capture of the video data by the computing device.
Elements of embodiments described with respect to a given aspect of the invention may be used in various embodiments of another aspect of the invention. For example, it is contemplated that features of dependent claims depending from one independent claim can be used in apparatus, articles, systems, and/or methods of any of the other independent claims.
BRIEF DESCRIPTION OF THE DRAWINGS
The foregoing and other objects, aspects, features, and advantages of the invention will become more apparent and may be better understood by referring to the following description taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a block diagram of an example system for visually augmenting a graphical rendering of a chemical structure representation or biological sequence representation with multi-dimensional information;
FIG. 2 is a flow chart of an example method for visually augmenting a graphical rendering of a chemical structure representation or biological sequence representation with multi-dimensional information;
FIG. 3 is a screen shot of an example user interface demonstrating visual augmentation of a graphical rendering of a chemical structure representation;
FIG. 4 is a flow chart of an example method for deriving and presenting multi-dimensional information related to a graphical rendering of a chemical structure representation or biological sequence representation;
FIG. 5 is a block diagram of an example network environment for visually augmenting a graphical rendering of a chemical structure representation with multi-dimensional information;
FIG. 6 is a block diagram of an example computing device and an example mobile computing device.
The features and advantages of the present disclosure will become more apparent from the detailed description set forth below when taken in conjunction with the drawings, in which like reference characters identify corresponding elements throughout. In the drawings, like reference numbers generally indicate identical, functionally similar, and/or structurally similar elements.
DESCRIPTION
It is contemplated that apparatus, systems, and methods of the claimed invention encompass variations and adaptations developed using information from the embodiments described herein. Adaptation and/or modification of the apparatus, systems, and methods described herein may be performed by those of ordinary skill in the relevant art.
Throughout the description, where apparatus and systems are described as having, including, or comprising specific components, or where processes and methods are described as having, including, or comprising specific steps, it is contemplated that, additionally, there are apparatus and systems of the present invention that consist essentially of, or consist of, the recited components, and that there are processes and methods according to the present invention that consist essentially of, or consist of, the recited processing steps.
It should be understood that the order of steps or order for performing certain actions is immaterial so long as the invention remains operable. Moreover, two or more steps or actions may be conducted simultaneously.
As used herein, the term “biological sequence” refers to the sequence of nucleotide or amino acid residues of a biological molecule (e.g., a DNA molecule, an RNA molecule, or a polypeptide). A biological sequence can be graphically represented in various ways, e.g., textually by a sequence of letters (e.g., using a 1-letter nucleotide code or using a 1-letter or 3-letter amino acid code), or structurally (e.g., as a chemical structure, a ball-and-stick model, a ribbon diagram, a space-filling model, or an electrostatic model).
FIG. 1 is a block diagram of an example system 100 for visually augmenting a graphical rendering of a chemical structure representation or biological sequence representation with multi-dimensional information. The system 100 includes a computing device 102 executing a software application 124 for applying augmented reality data to captured video information by identifying either a chemical structure or a biological sequence within the captured video and offering augmentation features and additional information regarding the identified chemical structure or biological sequence. The computing device 102, for example, captures a video of a chemical structure 106 a drawn upon a piece of paper 104 a, identifies the chemical structure 106 a as being a particular chemical compound, matches additional information to the chemical compound for augmenting the video, and displays a video image 104 b of the piece of paper 104 a augmented with the additional information within a display area 112 of the computing device 102. The additional information, for example, includes a two-dimensional chemical structure 106 b overlaying the hand-drawn chemical structure 106 a and a three-dimensional graphical representation 122 of the chemical structure 106 b presented adjacent to the two-dimensional chemical structure 106 b.
In some embodiments, the software application 124 executing upon the computing device 102 accesses a remote server 108 (e.g., via a network 110) to conduct a portion of the operations to achieve the augmented video presented within the display 112. For example, the software application 124 executing upon the computing device 102 may pass all or a portion of the video data to the server 108. In another example, the software application 124 executing upon the computing device 102 may pass user configuration settings (e.g., as saved in a configuration data store 126), user inputs, chemical structure information, chemical compound information, biological sequence information, biological species information, or other data to the server 108. The server 108, in response, may provide data for augmenting the video data and/or an augmented video image to the computing device 102.
In some embodiments, the server 108 may include an image identification engine 114 for identifying a graphical indication of a chemical structure or biological sequence within the video data and correlating the chemical structure to a chemical compound or the biological sequence to a biological species (e.g., DNA molecule, RNA molecule, polypeptide, etc.). The chemical structure or biological sequence captured by the video data, in some examples, can include a two-dimensional graphical representation, a three-dimensional graphical representation, or another graphical indication of a chemical compound or biological species. In some embodiments, the image identification engine 114 may recognize a written name or formula of a biological species or chemical compound within the video data. The chemical structure or biological sequence captured within the video data, in some additional examples, may be illustrated or hand-drawn on a flat or curved surface such as, in some examples, a white board, chalk board, or transparent hood of a laboratory experiment station.
In some embodiments, the server 108 may include an additional information matching engine 116 for matching additional information to the identified chemical compound or biological species, such as, in some examples, a name of the chemical compound or biological species, properties of the chemical compound or biological species, or a graphical representation (e.g., two dimensional, three dimensional, etc.) of the chemical structure or biological sequence. In some implementations, the additional information includes a location of the chemical compound or biological species within a laboratory, or reaction information regarding an ongoing experiment involving the chemical structure or biological sequence. The additional information, in some implementations, is accessed by the additional information matching engine 116 from one or more databases, such as a data store 120. In some implementations, a portion of the additional information is accessed from a public repository, such as the PubChem Compound database maintained by the National Center for Biotechnology Information (NCBI), the molecular spectral databases maintained by the National Institute of Standards and Technology (NIST), the Genbank sequence database maintained by NCBI, or the UniProt protein sequence database maintained by the UniProt consortium. In some implementations, a portion of the additional information is accessed from an electronic lab notebook (ELN) system.
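As a hedged illustration of the public-repository case, the additional information matching engine 116 could query PubChem through its PUG REST interface roughly as follows; the endpoint layout and response envelope reflect PubChem's published interface as best understood and should be checked against current documentation:

```python
# Hedged sketch: look up basic compound properties by name via PubChem's
# PUG REST interface (one of the public repositories named above).
import requests

def pubchem_properties(compound_name: str) -> dict:
    url = (
        "https://pubchem.ncbi.nlm.nih.gov/rest/pug/compound/name/"
        f"{compound_name}/property/MolecularFormula,MolecularWeight,"
        "CanonicalSMILES/JSON"
    )
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    # PropertyTable/Properties is PubChem's documented envelope for
    # property queries; verify against the current PUG REST docs.
    return resp.json()["PropertyTable"]["Properties"][0]

# Example: pubchem_properties("aspirin") should return the compound's
# CID, molecular formula, molecular weight, and canonical SMILES.
```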
The server 108, in some implementations, includes a three-dimensional view engine 118 for presenting and adjusting a three-dimensional graphical representation of the identified chemical compound or biological species. The three-dimensional view engine 118, for example, may be used to render the three-dimensional graphical representation 122 within the display 112. In some embodiments, the three-dimensional view engine 118 modifies the orientation of the three-dimensional graphical representation 122 based upon orientation input, such as a direct user input (e.g., one or more controls linked to the three-dimensional graphical representation 122 for rotating or tilting the aspect of the three-dimensional graphical representation 122) or device orientation input (e.g., as gleaned from one or more orientation sensors of the computing device 102 such as a gyroscope or accelerometer).
In some embodiments, the server 108 includes a video augmenting engine 128 for building an augmented video for presentation within the display 112. The video augmenting engine 128, for example, may merge the video captured by the computing device 102 with augmentation data such as, in some examples, a portion of the additional information (e.g., obtained at least in part from the data store 120), selectable controls for presenting additional information, and/or a two-dimensional or three-dimensional graphical representation of the chemical structure or biological sequence. In some embodiments, the video augmenting engine 128 presents a portion of the augmentation data as semi-transparent images overlaying the original video image. For example, the hand-drawn chemical structure 106 a may be partially visible beneath the two-dimensional chemical structure 106 b. The video augmenting engine 128, in some implementations, presents a portion of the augmentation data in a manner that “pins” the augmentation data to the video image of the chemical structure 106 a. For example, should the user move the computing device 102 or the paper 104 such that the alignment of the chemical structure 106 a moves within the display 112, a portion of the augmentation data may move with the position of the chemical structure 106 a within the video image. Visual elements, in some embodiments, may tie the portion of the augmentation data to the chemical structure 106 a. For example, arrows, call-out boxes, or lead lines may create a visual connection between the augmentation data and the chemical structure 106 b. This may be especially useful, for example, if two or more chemical structures or biological sequences are illustrated within the video data of the display 112. In this manner, the user could immediately interpret which augmentation data belonged to which chemical structure or biological sequence.
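The semi-transparent, pinned overlay behavior can be pictured with a short OpenCV sketch; the bounding-box source, panel geometry, and alpha value are assumptions for illustration:

```python
# Sketch: draw a semi-transparent panel at a fixed offset from the
# detected structure's bounding box, so the panel tracks the structure
# and the captured image remains partially visible beneath it.
import cv2
import numpy as np

def pin_panel(frame: np.ndarray, bbox, text: str, alpha: float = 0.5):
    x, y, w, h = bbox                       # detected structure location
    px, py = x + w + 10, y                  # panel sits just to its right
    overlay = frame.copy()
    cv2.rectangle(overlay, (px, py), (px + 220, py + 40), (255, 255, 255), -1)
    cv2.putText(overlay, text, (px + 8, py + 26),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 0), 1)
    # Semi-transparent blend: the video stays visible under the panel.
    return cv2.addWeighted(overlay, alpha, frame, 1.0 - alpha, 0.0)
```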
In addition to presenting the three-dimensional graphical representation 122 and overlaying the hand-drawn chemical structure 106 a with the two-dimensional chemical structure 106 b, in some implementations, the software application 124 may be used to provide the user with multi-dimensional data regarding the chemical structure 106. For example, the software application 124 may be configured to present a first set of options (e.g., as a default or based upon the user configuration settings in the configuration store 126) as one or more selectable controls (not illustrated), each control representing a category of information related to the chemical structure 106. In some examples, the categories may include chemical properties, a storage location, or a name of the compound. Upon selecting a first selectable control, the software application 124 may update the augmentation data presented within the display 112 to include information identified as pertaining to the selected category. The information, in some examples, may include text, graphics, video, and/or audio information. In some embodiments, one or more sub-categories may be presented to the user upon selection of a primary category. For example, upon selection of a properties category, sub-categories of physical properties and chemical properties may be presented. In this manner, a user may drill down to obtain more and more specific information regarding the chemical structure 106.
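The category and sub-category drill-down lends itself to a small tree that the interface walks as controls are selected; the category names below are illustrative rather than a schema fixed by the disclosure:

```python
# Sketch: categories and sub-categories as a tree the UI can walk.
CATEGORIES = {
    "name": {"label": "Name"},
    "location": {"label": "Storage location"},
    "properties": {
        "label": "Properties",
        "children": {
            "physical": {"label": "Physical properties"},
            "chemical": {"label": "Chemical properties"},
        },
    },
}

def children_of(path: list) -> dict:
    """Sub-categories at a path such as ["properties"]; an empty dict
    means the finest granularity, so the UI might offer a back control."""
    node = {"children": CATEGORIES}
    for key in path:
        node = node["children"][key]
    return node.get("children", {})

# children_of([]) lists the top-level call-out boxes;
# children_of(["properties"]) lists the physical/chemical sub-categories.
```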
Although illustrated in a particular example system 100, in other implementations, the computing device 102, shown here as a handheld computing device, may be a tablet computer, laptop computer, desktop computer, an augmented reality wearable computer such as computerized glasses (e.g., Google® Glass™ by Google® Inc. of Mountain View, Calif.), or other computing device in communication with a video capture device and a display.
Additionally, in other implementations, one or more of the image identification engine 114, additional information matching engine 116, three-dimensional view engine 118, and video augmenting engine 128 may reside within the computing device 102 (e.g., as part of the software application 124). In some implementations, the server 108 may include two or more servers, such as a server farm or cloud computing service for video augmentation. Further details and options for providing methods and systems for visually augmenting a graphical rendering of either a chemical structure representation or a biological sequence representation with multi-dimensional information are described below.
FIG. 2 is a flow chart of an example method 200 for visually augmenting a graphical rendering of a chemical structure representation with multi-dimensional information. The method 200, in some implementations, may be performed by the server 108 and/or the software application 124, as described in relation to FIG. 1.
In some implementations, the method 200 begins with receiving video data destined for presentation upon a user device (202). The video data, for example, may be captured using a camera feature of computerized eye glasses or a handheld computing device such as the computing device 102 described in relation to FIG. 1. In other examples, the video data may be captured using a video camera device connected to a personal computer, or the video data may be provided in real time from a separate computing device (e.g., broadcast to a number of students' computing devices in a lecture hall). The video data includes a video image track. In some implementations, the video data further includes an audio track and/or a metadata track.
In some implementations, a graphical representation of either a chemical structure or a biological sequence is identified within the video image track of the video data (204). The graphical representation, for example, may be identified by the image identification engine 114, described in relation to FIG. 1. The chemical structure or biological sequence, for example, may be a hand-drawn illustration or print illustration representing a chemical compound or biological species. In some implementations, the chemical structure or biological sequence may be partially presented (e.g., one or more bonds missing, a portion of an atom identifier cut off), obscured, or presented upon a curved surface. If the chemical structure or biological sequence is incomplete or does not otherwise exactly match a corresponding chemical compound or biological species, in some implementations, the captured representation is digitally refined to obtain an image for comparison with known chemical structures or biological sequences. For example, once a suspected chemical structure or biological sequence has been identified, the portion of the image containing the suspected chemical structure may be pruned (e.g., stray lines or text removed), image-adjusted, or otherwise digitally altered in an attempt to match the graphical representation of the chemical structure or biological sequence captured within the video image to a known chemical compound or biological species. In a particular example, two or more video frames may be reviewed (e.g., merged, appended) in relation to each other to identify the chemical compound or biological species.
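As one hypothetical reading of this digital refining step, the candidate region could be normalized, binarized, and pruned before comparison; the disclosure does not fix an algorithm, so the OpenCV operations below are assumptions:

```python
# Sketch: clean up a candidate region (uneven lighting, stray marks)
# before comparing it against stored structures or sequences.
import cv2
import numpy as np

def refine(region: np.ndarray) -> np.ndarray:
    gray = cv2.cvtColor(region, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (3, 3), 0)        # suppress sensor noise
    binary = cv2.adaptiveThreshold(                  # tolerate uneven lighting,
        gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,   # e.g., a curved whiteboard
        cv2.THRESH_BINARY_INV, 21, 10)
    kernel = np.ones((2, 2), np.uint8)               # prune small stray specks
    return cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
```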
In other implementations, rather than identifying a graphical representation of a chemical structure or biological sequence, a name of a chemical compound or biological species, a chemical formula, or another representation of a chemical compound or biological species may be identified.
In some implementations, augmentation data is matched to the chemical compound or biological species to augment the video image (206). The augmentation data may be matched to the chemical compound or biological species, for example, by the additional information matching engine 116 described in relation to FIG. 1. The augmentation data, in some implementations, includes image data such as a two-dimensional or three-dimensional representation of the chemical structure or biological sequence. In some implementations, the augmentation data includes additional information such as properties of the chemical compound or biological species, a storage location of the chemical compound or biological species, or reaction information regarding an experiment involving the chemical compound or biological species. The additional information, for example, may be arranged in categories and sub-categories such that a user may drill down to obtain further details regarding the chemical compound or biological sequence. To aid in obtaining the additional information, in some implementations, the augmentation data includes control information such as one or more selectable call-out boxes configured, upon selection, to present a portion of the additional information.
In some implementations, an indication of device orientation is received (210). For example, data derived from an orientation sensor (e.g., gyroscope, accelerometer, etc.) of a handheld computing device may be provided to aid in the augmentation of the video data. If the device orientation is known (210), in some implementations, a visual representation of a portion of the additional information is adjusted responsive to the device orientation (212). For example, the orientation of text presented as additional information or written upon a selectable control may be adjusted. If a three-dimensional graphical representation of the chemical structure or biological sequence has been identified as augmentation data, in some implementations, a rotation and/or revolution of the three-dimensional graphical representation may be adjusted based upon the orientation of the device. For example, the three-dimensional graphical representation may tilt with the tilting of the handheld device. The orientation of the three-dimensional graphical representation may be adjusted, for example, by the three-dimensional view engine 118, described in relation to FIG. 1.
In some implementations, a portion of the augmentation data is applied to the video data prior to presentation upon the user device (214). The augmentation data may be applied to the video data, for example, by the video augmenting engine 128, described in relation to FIG. 1. The video data, for example, may be merged with the augmentation data to create an augmented video data for presentation by the user device. In some embodiments, a portion of the augmentation data is presented as semi-transparent images overlaying the original video image. In some implementations, a portion of the augmentation data is presented in a manner that “pins” the augmentation data to the graphical representation of the chemical structure or biological sequence. For example, should the user move the computing device or the item including the graphical representation of the chemical structure or biological sequence such that the alignment of the chemical structure or biological sequence moves within the video image frame, a portion of the augmentation data may move with the position of the graphical representation of the chemical structure or biological sequence. Visual elements, in some embodiments, may tie a portion of the augmentation data to the graphical representation of the chemical structure or biological sequence. For example, arrows, call-out boxes, or lead lines may create a visual connection between the augmentation data and the graphical representation of the chemical structure or biological sequence.
In some implementations, user input may be received (216). For example, a user may select one of the selectable controls presented as augmentation data, or a user may interact with a user-selectable three-dimensional graphical representation of the chemical structure or biological sequence presented as augmentation data. If a user input other than orientation change is received (216), in some implementations, follow-on augmentation data is identified for augmenting the video data (218). For example, based upon the selection by the user of a category of additional information, corresponding image, text, and/or graphic data may be presented upon the video feed as augmentation data. Using the same example, rather than or in addition to the additional information, the user may be presented with one or more selectable sub-categories related to the selected category.
Whether or not follow-on augmentation data is identified, in some implementations, it is determined whether an orientation change has been received (220). In some implementations, the user alters the orientation of the computing device (e.g., physically tilts the device or rotates the display), causing updated orientation information to be received. In some implementations, the user manipulates the three-dimensional graphical representation of the chemical structure or biological sequence, causing orientation data specific to the three-dimensional graphical representation to be received. The visual representation of at least a portion of the augmentation data, in some implementations, is adjusted responsive to the orientation change (212).
Whether or not an orientation change was received at step 220, as described previously in relation to step (214), in some implementations, the augmentation data (and/or, if applicable, follow-on augmentation data) is applied to the subsequent (e.g., current) video data prior to presentation of the augmented subsequent video data upon the user device (214). The method 200, in some implementations, continues to update and adjust as additional video data frames, user inputs, and orientation changes are received.
Although described in relation to a series of particular steps, in some implementations, one or more of the steps of the method 200 may be performed in a different order, or one or more steps may be performed in parallel. In some implementations, one or more of the steps of the method 200 may be removed or modified, or one or more steps may be added to the method 200, while staying within the scope and intent of the method 200.
FIG. 3 is a screen shot 300 of an example user interface 304, rendered upon a computing device 302. The user interface 304 demonstrates an example of visual augmentation of a graphical rendering of a chemical structure representation. The user interface 304, for example, may be presented in relation to capturing video data regarding the hand-drawn chemical structure 106 a upon the paper 104 a, as described in relation to FIG. 1.
As illustrated within the user interface 304, the image of the paper 104 a containing the hand-drawn chemical structure 106 a has been obscured by semi-transparent augmentation data, including a two-dimensional graphical representation 306 of the chemical structure 106 a and a series of selectable call-out boxes 308 identifying categories of additional information pertaining to the chemical structure 106 a such as a locations category call-out box 308 a, a properties category call-out box 308 b, and a name category call-out box 308 c. Upon selection by a user of one of the selectable call-out boxes 308, for example, augmentation data regarding the selected category may be presented to the user in the display area of the computing device 302. In some implementations, upon selection by a user of one of the selectable call-out boxes 308, one or more additional controls are presented to the user. The additional controls, for example, may be configured to present, upon selection, information regarding one or more sub-categories of the identified category.
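Selection of a call-out box reduces to hit-testing the touch point against the rendered boxes; a minimal sketch, with the box geometry assumed:

```python
# Sketch: report which call-out box (if any) sits under a touch point.
from dataclasses import dataclass
from typing import Iterable, Optional

@dataclass
class CallOut:
    category: str   # e.g., "location", "properties", "name"
    x: int
    y: int
    w: int
    h: int

def hit_test(boxes: Iterable[CallOut], tx: int, ty: int) -> Optional[str]:
    for b in boxes:
        if b.x <= tx < b.x + b.w and b.y <= ty < b.y + b.h:
            return b.category
    return None
```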
In some implementations, the one or more additional controls may include a control configured to allow the user to freeze the video image containing the information regarding the identified category. For example, upon being presented with information relevant to the identified chemical structure or biological sequence, the user may desire to take a “snapshot” of the information for review without having to maintain video connection with the identified chemical structure or biological sequence (e.g., move the camera lens of the computing device 102 away from pointing at the paper 104 a). In some implementations, although the video has been frozen on a snapshot image (e.g., current frame), the controls remain active for pursuing additional information regarding the chemical structure or biological sequence illustrated within the snapshot image.
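The freeze behavior can be sketched as a feed that keeps returning the last augmented frame while frozen, leaving the controls active; a hypothetical wrapper, not the mechanism the disclosure mandates:

```python
# Sketch: while frozen, keep showing the most recent augmented frame so
# the user can keep selecting controls without aiming the camera.
class FreezeableFeed:
    def __init__(self):
        self.frozen = False
        self._last = None

    def display_frame(self, augmented_frame):
        if not self.frozen:
            self._last = augmented_frame
        return self._last            # snapshot while frozen; live otherwise

    def toggle_freeze(self):
        self.frozen = not self.frozen
```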
The user interface 304 also includes a three-dimensional graphical representation 310 of the chemical structure 106 a, presented adjacent to the two-dimensional graphical representation 306. In some implementations, upon changing the orientation of the computing device 302, the orientation of the three-dimensional graphical representation 310 may automatically adjust (e.g., tilt or rotate). In some implementations, the three-dimensional graphical representation 310 slowly revolves and/or rotates as a moving image within the video presented by the user interface 304.
At the bottom of the user interface 304, a search field 312 is presented. The search field 312 may be used, for example, for searching for additional information related to the chemical structure 106 a.
FIG. 4 is a flow chart of an example method 400 for deriving and presenting multi-dimensional information related to a graphical rendering of a chemical structure representation or biological sequence representation. The method 400, in some implementations, may be performed by the server 108 and/or the software application 124, as described in relation to FIG. 1.
In some implementations, the method 400 begins with receiving video data destined for presentation upon a user device (402). The video data, for example, may be captured using a camera feature of computerized eye glasses or a handheld computing device such as the computing device 102 described in relation to FIG. 1. In other examples, the video data may be captured using a video camera device connected to a personal computer, or the video data may be provided in real time from a separate computing device (e.g., broadcast to a number of students' computing devices in a lecture hall). The video data includes a video image track. In some implementations, the video data further includes an audio track and/or a metadata track.
In some implementations, a graphical representation of either a chemical structure or a biological sequence is identified within the video image track of the video data (404). The graphical representation, for example, may be identified by the image identification engine 114, described in relation to FIG. 1. The chemical structure or biological sequence, for example, may be a hand-drawn illustration or print illustration representing a chemical compound or biological species. In some implementations, the chemical structure or biological sequence may be partially presented (e.g., one or more bonds missing, a portion of an atom identifier cut off), obscured, or presented upon a curved surface. If the chemical structure or biological sequence is incomplete or does not otherwise exactly match a corresponding chemical compound or biological species, in some implementations, the captured representation is digitally refined to obtain an image for comparison with known chemical structures or biological sequences. For example, once a suspected chemical structure or biological sequence has been identified, the portion of the image containing the suspected chemical structure may be pruned (e.g., stray lines or text removed), image-adjusted, or otherwise digitally altered in an attempt to match the graphical representation of the chemical structure or biological sequence captured within the video image to a known chemical compound or biological species. In a particular example, two or more video frames may be reviewed (e.g., merged, appended) in relation to each other to identify the chemical compound or biological species.
In other implementations, rather than identifying a graphical representation of a chemical structure or biological sequence, a name of a chemical compound or biological species, a chemical formula, or another representation of a chemical compound or biological species may be identified.
In some implementations, augmentation features are identified for adding information related to the chemical compound or biological species to the video data (406). The augmentation features may be identified, for example, by the additional information matching engine 116 described in relation to FIG. 1. The augmentation features, for example, may be arranged in categories and sub-categories such that a user may drill down to obtain further details regarding the chemical compound or biological species. To aid in obtaining the additional information, in some implementations, each augmentation feature is associated with a user interface control configured, upon selection, to present category information related to the chemical compound or biological species.
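The category/sub-category arrangement could be modeled as a small tree, each node pairing a control label with its information payload; the compound and values below are a hypothetical example, not content from the disclosure:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class FeatureCategory:
    """One augmentation feature: a labeled control that, upon selection,
    surfaces its information and exposes any sub-categories."""
    label: str                      # e.g., "Properties", "Suppliers"
    info: str = ""                  # payload presented on selection
    children: List["FeatureCategory"] = field(default_factory=list)

def controls_for(category: FeatureCategory):
    """Control labels offered at the current drill-down level."""
    return [child.label for child in category.children]

# Hypothetical category tree for an identified compound:
root = FeatureCategory("Acetaminophen", children=[
    FeatureCategory("Properties", children=[
        FeatureCategory("Molecular weight", info="151.16 g/mol"),
        FeatureCategory("Melting point", info="169 °C"),
    ]),
    FeatureCategory("Suppliers"),
])
```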
In some implementations, the video image, including one or more controls for selecting the augmentation features, is provided for presentation to the user on the user computing device (408). The controls, in some examples, may include any combination of call-out boxes, radio buttons, a configuration listing, a search box, or a voice-activated listing of available options for further information. In some implementations, the controls are pinned to the identified chemical structure or biological sequence, such that, as the identified chemical structure or biological sequence moves within the display region of the user computing device (e.g., due to movement of the camera relative to the chemical structure or biological sequence being captured), the controls move within the display to be positioned relative to the identified chemical structure or biological sequence. In some implementations, one or more controls are made visible or rendered opaque upon initial input from a user (e.g., identification of a touch by the user upon a touch screen display, identification of a “click” by the user via an input tool, etc.).
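Pinning might amount to recomputing each control's screen position from the structure's tracked bounding box every frame; in this sketch the offsets, expressed as fractions of the box size, are an assumption for illustration:

```python
def pin_controls(controls, bbox):
    """Position each control relative to the tracked bounding box so the
    controls follow the structure as it moves between frames.

    `controls` maps a control id to a (fx, fy) offset from the box's
    top-left corner, expressed as fractions of the box size.
    """
    x, y, w, h = bbox
    return {cid: (x + fx * w, y + fy * h) for cid, (fx, fy) in controls.items()}

# Each frame, re-run detection (or a tracker) to get the new bbox, then:
layout = pin_controls({"properties": (1.05, 0.0), "3d_view": (1.05, 0.3)},
                      bbox=(120, 80, 200, 160))
```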
In some implementations, if a feature is selected by the user (410), the video data is augmented with information related to the selected feature (412). For example, upon receiving an indication of a selection of a particular control, information related to that control (e.g., one or more of text, graphic, video, and audio information) may be applied to subsequent video data prior to presentation of the subsequent video data.
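Applying the selected information to subsequent frames could be as simple as drawing the text near the tracked structure before presentation; this OpenCV sketch, including its rough backing-box sizing, is illustrative only:

```python
import cv2

def augment_frame(frame, text, anchor):
    """Draw the selected feature's text near the tracked structure; applied
    to each subsequent frame prior to presentation."""
    x, y = anchor
    out = frame.copy()
    # Rough white backing box sized from the text length, then the text itself.
    cv2.rectangle(out, (x, y - 22), (x + 8 * len(text) + 10, y + 6),
                  (255, 255, 255), thickness=-1)
    cv2.putText(out, text, (x + 5, y), cv2.FONT_HERSHEY_SIMPLEX,
                0.5, (0, 0, 0), 1, cv2.LINE_AA)
    return out
```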
In some implementations, in lieu of or in addition to presenting information related to the selected control, one or more sub-features of the selected feature are identified (414). For example, one or more sub-categories may be identified related to the selected information category. Upon identification of the sub-categories, in some implementations, the video data may be augmented with one or more controls for selecting the one or more sub-categories. If, instead, no sub-categories are available (e.g., the user has reached the finest granularity of information regarding the current topic), in some implementations, one or more controls regarding a higher level of category abstraction may be presented (e.g., akin to a “back” or “related information” option).
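Drill-down and the “back” option could be kept on a simple navigation stack over the FeatureCategory tree sketched earlier; again an illustrative assumption, not the claimed method:

```python
class CategoryNavigator:
    """Tracks the drill-down position so a 'back' control can be offered
    once the finest granularity is reached (builds on the FeatureCategory
    sketch above)."""

    def __init__(self, root):
        self.stack = [root]          # path from the root to the current category

    def select(self, label):
        """Drill into the sub-category whose control was selected; returns
        its info payload and the labels of any further sub-categories."""
        for child in self.stack[-1].children:
            if child.label == label:
                self.stack.append(child)
                return child.info, [c.label for c in child.children]
        return None, []

    def back(self):
        """Step up one level of category abstraction."""
        if len(self.stack) > 1:
            self.stack.pop()
        return [c.label for c in self.stack[-1].children]
```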
In some implementations, the augmented subsequent video data is presented with feature information and/or one or more controls for selecting sub-features of the identified feature (416). The augmented subsequent video data, for example, may be presented in a manner similar to that described in relation to step 408.
Although described in relation to a series of particular steps, in some implementations, one or more of the steps of the method 400 may be performed in a different order, or one or more steps may be performed in parallel. In some implementations, one or more of the steps of the method 400 may be removed or modified, or one or more steps may be added to the method 400, while staying within the scope and intent of the method 400.
FIG. 5 shows an implementation of an exemplary cloud computing environment 500 for visually augmenting a graphical rendering of either a chemical structure representation or a biological sequence representation with multi-dimensional information. The cloud computing environment 500 may include one or more resource providers 502 a, 502 b, 502 c (collectively, 502). Each resource provider 502 may include computing resources. In some implementations, computing resources may include any hardware and/or software used to process data. For example, computing resources may include hardware and/or software capable of executing algorithms, computer programs, and/or computer applications. In some implementations, exemplary computing resources may include application servers and/or databases with storage and retrieval capabilities. Each resource provider 502 may be connected to any other resource provider 502 in the cloud computing environment 500. In some implementations, the resource providers 502 may be connected over a computer network 508. Each resource provider 502 may be connected to one or more computing devices 504 a, 504 b, 504 c (collectively, 504) over the computer network 508.
The cloud computing environment 500 may include a resource manager 506. The resource manager 506 may be connected to the resource providers 502 and the computing devices 504 over the computer network 508. In some implementations, the resource manager 506 may facilitate the provision of computing resources by one or more resource providers 502 to one or more computing devices 504. The resource manager 506 may receive a request for a computing resource from a particular computing device 504. The resource manager 506 may identify one or more resource providers 502 capable of providing the computing resource requested by the computing device 504. The resource manager 506 may select a resource provider 502 to provide the computing resource. The resource manager 506 may facilitate a connection between the resource provider 502 and a particular computing device 504. In some implementations, the resource manager 506 may establish a connection between a particular resource provider 502 and a particular computing device 504. In some implementations, the resource manager 506 may redirect a particular computing device 504 to a particular resource provider 502 with the requested computing resource.
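In outline, the resource manager's matching role could be sketched as follows, with the selection policy left as a placeholder; the identifiers 502 a/504 a are borrowed from FIG. 5, and everything else is assumed:

```python
import random

class ResourceManager:
    """Matches a computing-resource request to a provider able to serve it,
    then hands back a connection target (here just the provider id)."""

    def __init__(self, providers):
        # providers: mapping of provider id -> set of resources it offers
        self.providers = providers

    def request(self, device_id, resource):
        capable = [pid for pid, offered in self.providers.items()
                   if resource in offered]
        if not capable:
            raise LookupError(f"no provider offers {resource!r}")
        chosen = random.choice(capable)  # placeholder selection policy
        # A real manager might establish the connection itself or simply
        # redirect the device to the chosen provider.
        return {"device": device_id, "provider": chosen, "resource": resource}

manager = ResourceManager({
    "502a": {"structure-search", "rendering"},
    "502b": {"rendering"},
})
print(manager.request("504a", "rendering"))
```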
FIG. 6 shows an example of a computing device 600 and a mobile computing device 650 that can be used to implement the techniques described in this disclosure. The computing device 600 is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The mobile computing device 650 is intended to represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smart-phones, and other similar computing devices. The components shown here, their connections and relationships, and their functions, are meant to be examples only, and are not meant to be limiting.
The computing device 600 includes a processor 602, a memory 604, a storage device 606, a high-speed interface 608 connecting to the memory 604 and multiple high-speed expansion ports 610, and a low-speed interface 612 connecting to a low-speed expansion port 614 and the storage device 606. Each of the processor 602, the memory 604, the storage device 606, the high-speed interface 608, the high-speed expansion ports 610, and the low-speed interface 612 are interconnected using various buses, and may be mounted on a common motherboard or in other manners as appropriate. The processor 602 can process instructions for execution within the computing device 600, including instructions stored in the memory 604 or on the storage device 606 to display graphical information for a GUI on an external input/output device, such as a display 616 coupled to the high-speed interface 608. In other implementations, multiple processors and/or multiple buses may be used, as appropriate, along with multiple memories and types of memory. Also, multiple computing devices may be connected, with each device providing portions of the necessary operations (e.g., as a server bank, a group of blade servers, or a multi-processor system).
The memory 604 stores information within the computing device 600. In some implementations, the memory 604 is a volatile memory unit or units. In some implementations, the memory 604 is a non-volatile memory unit or units. The memory 604 may also be another form of computer-readable medium, such as a magnetic or optical disk.
The storage device 606 is capable of providing mass storage for the computing device 600. In some implementations, the storage device 606 may be or contain a computer-readable medium, such as a floppy disk device, a hard disk device, an optical disk device, or a tape device, a flash memory or other similar solid state memory device, or an array of devices, including devices in a storage area network or other configurations. Instructions can be stored in an information carrier. The instructions, when executed by one or more processing devices (for example, processor 602), perform one or more methods, such as those described above. The instructions can also be stored by one or more storage devices such as computer- or machine-readable mediums (for example, the memory 604, the storage device 606, or memory on the processor 602).
The high-speed interface 608 manages bandwidth-intensive operations for the computing device 600, while the low-speed interface 612 manages lower bandwidth-intensive operations. Such allocation of functions is an example only. In some implementations, the high-speed interface 608 is coupled to the memory 604, the display 616 (e.g., through a graphics processor or accelerator), and to the high-speed expansion ports 610, which may accept various expansion cards (not shown). In some implementations, the low-speed interface 612 is coupled to the storage device 606 and the low-speed expansion port 614. The low-speed expansion port 614, which may include various communication ports (e.g., USB, Bluetooth®, Ethernet, wireless Ethernet), may be coupled to one or more input/output devices, such as a keyboard, a pointing device, a scanner, or a networking device such as a switch or router, e.g., through a network adapter.
The computing device 600 may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a standard server 620, or multiple times in a group of such servers. In addition, it may be implemented in a personal computer such as a laptop computer 622. It may also be implemented as part of a rack server system 624. Alternatively, components from the computing device 600 may be combined with other components in a mobile device (not shown), such as a mobile computing device 650. Each of such devices may contain one or more of the computing device 600 and the mobile computing device 650, and an entire system may be made up of multiple computing devices communicating with each other.
The mobile computing device 650 includes a processor 652, a memory 664, an input/output device such as a display 654, a communication interface 666, and a transceiver 668, among other components. The mobile computing device 650 may also be provided with a storage device, such as a micro-drive or other device, to provide additional storage. Each of the processor 652, the memory 664, the display 654, the communication interface 666, and the transceiver 668, are interconnected using various buses, and several of the components may be mounted on a common motherboard or in other manners as appropriate.
The processor 652 can execute instructions within the mobile computing device 650, including instructions stored in the memory 664. The processor 652 may be implemented as a chipset of chips that include separate and multiple analog and digital processors. The processor 652 may provide, for example, for coordination of the other components of the mobile computing device 650, such as control of user interfaces, applications run by the mobile computing device 650, and wireless communication by the mobile computing device 650.
The processor 652 may communicate with a user through a control interface 658 and a display interface 656 coupled to the display 654. The display 654 may be, for example, a TFT (Thin-Film-Transistor Liquid Crystal Display) display or an OLED (Organic Light Emitting Diode) display, or other appropriate display technology. The display interface 656 may comprise appropriate circuitry for driving the display 654 to present graphical and other information to a user. The control interface 658 may receive commands from a user and convert them for submission to the processor 652. In addition, an external interface 662 may provide communication with the processor 652, so as to enable near area communication of the mobile computing device 650 with other devices. The external interface 662 may provide, for example, for wired communication in some implementations, or for wireless communication in other implementations, and multiple interfaces may also be used.
The memory 664 stores information within the mobile computing device 650. The memory 664 can be implemented as one or more of a computer-readable medium or media, a volatile memory unit or units, or a non-volatile memory unit or units. An expansion memory 674 may also be provided and connected to the mobile computing device 650 through an expansion interface 672, which may include, for example, a SIMM (Single In Line Memory Module) card interface. The expansion memory 674 may provide extra storage space for the mobile computing device 650, or may also store applications or other information for the mobile computing device 650. Specifically, the expansion memory 674 may include instructions to carry out or supplement the processes described above, and may also include secure information. Thus, for example, the expansion memory 674 may be provided as a security module for the mobile computing device 650, and may be programmed with instructions that permit secure use of the mobile computing device 650. In addition, secure applications may be provided via the SIMM cards, along with additional information, such as placing identifying information on the SIMM card in a non-hackable manner.
The memory may include, for example, flash memory and/or NVRAM memory (non-volatile random access memory), as discussed below. In some implementations, instructions are stored in an information carrier such that the instructions, when executed by one or more processing devices (for example, the processor 652), perform one or more methods, such as those described above. The instructions can also be stored by one or more storage devices, such as one or more computer- or machine-readable mediums (for example, the memory 664, the expansion memory 674, or memory on the processor 652). In some implementations, the instructions can be received in a propagated signal, for example, over the transceiver 668 or the external interface 662.
The mobile computing device 650 may communicate wirelessly through the communication interface 666, which may include digital signal processing circuitry where necessary. The communication interface 666 may provide for communications under various modes or protocols, such as GSM voice calls (Global System for Mobile communications), SMS (Short Message Service), EMS (Enhanced Messaging Service), or MMS messaging (Multimedia Messaging Service), CDMA (code division multiple access), TDMA (time division multiple access), PDC (Personal Digital Cellular), WCDMA (Wideband Code Division Multiple Access), CDMA2000, or GPRS (General Packet Radio Service), among others. Such communication may occur, for example, through the transceiver 668 using a radio frequency. In addition, short-range communication may occur, such as using a Bluetooth®, Wi-Fi™, or other such transceiver (not shown). In addition, a GPS (Global Positioning System) receiver module 670 may provide additional navigation- and location-related wireless data to the mobile computing device 650, which may be used as appropriate by applications running on the mobile computing device 650.
The mobile computing device 650 may also communicate audibly using an audio codec 660, which may receive spoken information from a user and convert it to usable digital information. The audio codec 660 may likewise generate audible sound for a user, such as through a speaker, e.g., in a handset of the mobile computing device 650. Such sound may include sound from voice telephone calls, may include recorded sound (e.g., voice messages, music files, etc.) and may also include sound generated by applications operating on the mobile computing device 650.
The mobile computing device 650 may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a cellular telephone 680. It may also be implemented as part of a smart-phone 682, personal digital assistant, or other similar mobile device.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, specially designed ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various implementations can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software, software applications or code) include machine instructions for a programmable processor, and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms machine-readable medium and computer-readable medium refer to any computer program product, apparatus and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term machine-readable signal refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user can be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front end component (e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back end, middleware, or front end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network (LAN), a wide area network (WAN), and the Internet.
The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
It should also be noted that embodiments of the present disclosure may be provided as one or more computer-readable programs embodied on or in one or more articles of manufacture. The article of manufacture may be any suitable hardware apparatus, such as, for example, a floppy disk, a hard disk, a CD ROM, a CD-RW, a CD-R, a DVD ROM, a DVD-RW, a DVD-R, a flash memory card, a PROM, a RAM, a ROM, or a magnetic tape. In general, the computer-readable programs may be implemented in any programming language. Some examples of languages that may be used include C, C++, or Java. The software programs may be further translated into machine language or virtual machine instructions and stored in a program file in that form. The program file may then be stored on or in one or more of the articles of manufacture.
Certain embodiments of the present invention were described above. It is, however, expressly noted that the present invention is not limited to those embodiments; rather, the intention is that additions and modifications to what was expressly described herein are also included within the scope of the invention. Moreover, it is to be understood that the features of the various embodiments described herein are not mutually exclusive and can exist in various combinations and permutations, even if such combinations or permutations were not made express herein, without departing from the spirit and scope of the invention. In fact, variations, modifications, and other implementations of what was described herein will occur to those of ordinary skill in the art without departing from the spirit and the scope of the invention. As such, the invention is not to be defined only by the preceding illustrative description.
In view of the structure, functions and apparatus of the systems and methods described here, in some implementations, systems and methods for visually augmenting a graphical rendering of a chemical structure representation with multi-dimensional information are provided. Having described certain implementations of visual augmentation of a graphical rendering of a chemical structure representation or biological sequence representation with multi-dimensional information, it will now become apparent to one of skill in the art that other implementations incorporating the concepts of the disclosure may be used. Therefore, the disclosure should not be limited to certain implementations, but rather should be limited only by the spirit and scope of the following claims.

Claims (21)

The invention claimed is:
1. A method comprising:
receiving video data captured by a first computing device, wherein the video data comprises a sequence of images comprising a first video frame;
identifying, within the first video frame, by a processor of a second computing device, a graphical representation of one of a chemical structure and a biological sequence;
matching, by the processor, additional information to the graphical representation, wherein the additional information comprises at least one of
(i) a three-dimensional representation of the chemical structure or biological sequence,
(ii) a name of
a) a chemical compound represented by the chemical structure, or
b) a biological species comprising a DNA molecule, an RNA molecule, or a polypeptide represented by the biological sequence, and
(iii) a plurality of properties of the chemical compound or biological species;
rendering, by the processor, augmentation data corresponding to the additional information, wherein the augmentation data comprises one or both of (i) and (ii):
(i) graphics corresponding to at least a portion of the additional information, and
(ii) one or more graphical controls configured, upon selection, to present at least a portion of the additional information; and
merging, by the processor, the video data with the rendered augmentation data to positionally link the augmentation data to the graphical representation of the chemical structure or the biological sequence identified within the first video frame and providing the video data for presentation upon a display controlled by the first computing device, wherein the video data is presented by the first computing device in substantially real time in relation to the capture of the video data by the first computing device.
2. The method of claim 1, wherein the first computing device comprises the second computing device.
3. The method of claim 1, wherein the graphical representation of the chemical structure comprises a two-dimensional drawing of the chemical structure.
4. The method of claim 1, wherein merging the video data with the rendered augmentation data comprises:
determining an orientation of the first computing device; and
rendering the three-dimensional representation based in part upon the orientation of the first computing device.
5. The method of claim 1, wherein the additional information further comprises (iv) a storage location.
6. The method of claim 5, wherein matching additional information to the graphical representation comprises determining the storage location, wherein determining the storage location comprises identifying the chemical compound or biological species within a catalog.
7. The method of claim 1, wherein the rendering and the merging comprises rendering a first graphical control of the one or more graphical controls as a semi-transparent image and overlaying the semi-transparent image upon the first video frame.
8. The method of claim 1, wherein merging the video data with the rendered augmentation data comprises superimposing the graphical representation with at least a portion of the additional information.
9. The method of claim 1, wherein matching the additional information to the graphical representation comprises comparing the graphical representation to one or more stored graphical representations.
10. The method of claim 9, wherein matching the additional information to the graphical representation comprises digitally refining the portion of the graphical representation prior to comparing the graphical representation to the one or more stored graphical representations.
11. The method of claim 1, further comprising:
receiving, by the processor, an indication of selection of a first graphical control of the one or more graphical controls;
rendering, by the processor, augmentation data corresponding to the additional information related to the first graphical control; and
merging, by the processor, subsequent video data with the portion of the rendered augmentation data related to the first graphical control.
12. The method of claim 11, wherein merging the subsequent video data comprises adding audio data to the video data, wherein the audio data comprises a verbal description of the additional information.
13. The method of claim 1, wherein merging the video data with the rendered augmentation data comprises positioning the augmentation data in proximity to the graphical representation of the chemical structure or the biological sequence identified within the first video frame.
14. A system comprising:
a processor; and
a memory having instructions stored thereon, wherein the instructions, when executed by the processor, cause the processor to:
receive video data captured by a computing device, wherein the video data comprises a sequence of images comprising a first video frame;
identify, within the first video frame, a visual identification of a chemical structure or a biological sequence;
match additional information to the visual identification, wherein the additional information comprises at least one of
(i) a graphical representation of the chemical structure or biological sequence,
(ii) a name of
a) a chemical compound represented by the chemical structure, or
b) a biological species comprising a DNA molecule, an RNA molecule, or a polypeptide represented by the biological sequence, and
(iii) a plurality of properties of the chemical compound or biological species;
render augmentation data corresponding to the additional information, wherein the augmentation data comprises one or both of (i) and (ii):
(i) graphics corresponding to at least a portion of the additional information, and
(ii) one or more graphical controls configured, upon selection, to present at least a portion of the additional information; and
merge the video data with the rendered augmentation data to positionally link the augmentation data to the visual identification of the chemical structure or the biological sequence identified within the first video frame, wherein
merging the video data with the rendered augmentation data comprises positionally linking the at least one of (i) the portion of the additional information and (ii) the one or more graphical controls to the visual identification such that, upon movement of the visual identification within the first video frame, the at least one of (i) the portion of the additional information and (ii) the one or more graphical controls undergo a corresponding motion; and
provide the video data for presentation upon a display;
wherein the video data is presented in substantially real time in relation to the capture of the video data by the computing device.
15. The system of claim 14, wherein the computing device comprises the processor.
16. The system of claim 14, wherein the visual identification comprises a two-dimensional drawing of a chemical structure or biological sequence.
17. The system of claim 14, wherein the instructions, when executed, cause the processor to:
receive, from a user of the computing device, an indication of video freeze; and
responsive to the indication of video freeze, provide for presentation a still image comprising a most recent augmented video data frame.
18. The system of claim 17, wherein the instructions, when executed, cause the processor to:
receive, after providing the still image, selection of a first graphical control of the one or more graphical controls;
apply, responsive to selection of the first graphical control, the portion of the additional information related to the first graphical control to the still image to determine a second still image; and
provide, for presentation upon the display, the second still image.
19. A non-transitory computer readable medium having instructions stored thereon, wherein the instructions, when executed by a processor, cause the processor to:
receive video data destined for display upon a computing device, wherein the video data comprises a sequence of images comprising a first video frame;
identify, within the first video frame, a graphical representation of one of a chemical structure and a biological sequence;
match additional information to the graphical representation, wherein the additional information is arranged in a plurality of feature categories;
render augmentation data corresponding to the additional information, wherein the augmentation data comprises one or more graphical controls configured, upon selection, to present at least a portion of the additional information belonging to a first category of the plurality of feature categories;
merge the video data with the one or more graphical controls to positionally link the augmentation data to the graphical representation of the chemical structure or the biological sequence identified within the first video frame;
after augmenting the video data, provide the video data for presentation upon a display controlled by the computing device;
receive, responsive to selection of a first graphical control of the one or more graphical controls by a user of the computing device, an indication corresponding to the first category; and
responsive to the indication, augment subsequent video data with at least a portion of the additional information belonging to the first category; and
after augmenting the subsequent video data, provide the subsequent video data for presentation upon the display controlled by the computing device.
20. The computer readable medium of claim 19, wherein the computing device comprises the processor.
21. The computer readable medium of claim 19, wherein:
the video data was captured by the computing device; and
the video data is presented by the computing device in substantially real time in relation to the capture of the video data by the computing device.
US14/058,865 2013-03-13 2013-10-21 Visually augmenting a graphical rendering of a chemical structure representation or biological sequence representation with multi-dimensional information Active US8854361B1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
US14/058,865 US8854361B1 (en) 2013-03-13 2013-10-21 Visually augmenting a graphical rendering of a chemical structure representation or biological sequence representation with multi-dimensional information
PCT/US2014/016249 WO2014163777A1 (en) 2013-03-13 2014-02-13 Visually augmenting a graphical rendering of a chemical structure representation or biological sequence representation with multi-dimensional information
EP14708174.9A EP2973083B1 (en) 2013-03-13 2014-02-13 Visually augmenting a graphical rendering of a chemical structure representation or biological sequence representation with multi-dimensional information
US14/471,456 US11164660B2 (en) 2013-03-13 2014-08-28 Visually augmenting a graphical rendering of a chemical structure representation or biological sequence representation with multi-dimensional information

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201361779607P 2013-03-13 2013-03-13
US14/058,865 US8854361B1 (en) 2013-03-13 2013-10-21 Visually augmenting a graphical rendering of a chemical structure representation or biological sequence representation with multi-dimensional information

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US14/471,456 Continuation US11164660B2 (en) 2013-03-13 2014-08-28 Visually augmenting a graphical rendering of a chemical structure representation or biological sequence representation with multi-dimensional information

Publications (2)

Publication Number Publication Date
US20140267240A1 US20140267240A1 (en) 2014-09-18
US8854361B1 true US8854361B1 (en) 2014-10-07

Family

ID=51525374

Family Applications (2)

Application Number Title Priority Date Filing Date
US14/058,865 Active US8854361B1 (en) 2013-03-13 2013-10-21 Visually augmenting a graphical rendering of a chemical structure representation or biological sequence representation with multi-dimensional information
US14/471,456 Active US11164660B2 (en) 2013-03-13 2014-08-28 Visually augmenting a graphical rendering of a chemical structure representation or biological sequence representation with multi-dimensional information

Family Applications After (1)

Application Number Title Priority Date Filing Date
US14/471,456 Active US11164660B2 (en) 2013-03-13 2014-08-28 Visually augmenting a graphical rendering of a chemical structure representation or biological sequence representation with multi-dimensional information

Country Status (3)

Country Link
US (2) US8854361B1 (en)
EP (1) EP2973083B1 (en)
WO (1) WO2014163777A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120137207A1 (en) * 2010-11-29 2012-05-31 Heinz Christopher J Systems and methods for converting a pdf file
US9751294B2 (en) 2013-05-09 2017-09-05 Perkinelmer Informatics, Inc. Systems and methods for translating three dimensional graphic molecular models to computer aided design format
US10572545B2 (en) 2017-03-03 2020-02-25 Perkinelmer Informatics, Inc Systems and methods for searching and indexing documents comprising chemical information
US11164660B2 (en) 2013-03-13 2021-11-02 Perkinelmer Informatics, Inc. Visually augmenting a graphical rendering of a chemical structure representation or biological sequence representation with multi-dimensional information

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8433723B2 (en) 2010-05-03 2013-04-30 Cambridgesoft Corporation Systems, methods, and apparatus for processing documents to identify structures
US9977876B2 (en) 2012-02-24 2018-05-22 Perkinelmer Informatics, Inc. Systems, methods, and apparatus for drawing chemical structures using touch and gestures
US9535583B2 (en) 2012-12-13 2017-01-03 Perkinelmer Informatics, Inc. Draw-ahead feature for chemical structure drawing applications
WO2014163749A1 (en) 2013-03-13 2014-10-09 Cambridgesoft Corporation Systems and methods for gesture-based sharing of data between separate electronic devices
US10126927B1 (en) * 2013-03-15 2018-11-13 Study Social, Inc. Collaborative, social online education and whiteboard techniques
US9430127B2 (en) 2013-05-08 2016-08-30 Cambridgesoft Corporation Systems and methods for providing feedback cues for touch screen interface interaction with chemical and biological structure drawing applications
JP6618068B2 (en) * 2015-02-02 2019-12-11 株式会社京都廣川リサーチアンドコンサルティング Substance learning support device, server device, substance learning support method, and program
GB201504170D0 (en) * 2015-03-12 2015-04-29 Mel Science Ltd MEL Science
US20170140669A1 (en) * 2015-11-13 2017-05-18 International Business Machines Corporation Virtual lab for hands-on learning using tangible user interactions
US10796484B2 (en) * 2017-06-14 2020-10-06 Anand Babu Chitavadigi System and method for interactive multimedia and multi-lingual guided tour/panorama tour
CN109215402B (en) * 2018-10-18 2020-09-15 广州嘉影软件有限公司 On-book chemical experiment method and system based on AR
CN112395846B (en) * 2020-03-17 2021-07-06 北京三维天地科技股份有限公司 Electronic experiment record report generation system capable of configuring template
US11600050B2 (en) * 2021-04-02 2023-03-07 Streem, Llc Determining 6D pose estimates for augmented reality (AR) sessions
KR20230017935A (en) * 2021-07-28 2023-02-07 노우린 하케 리디 Chemistry education device using artificial intelligence and augmented reality and method of providing chemistry education service using the same

Citations (37)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020161599A1 (en) 2000-09-05 2002-10-31 Faerman Carlos H. Electronic database of enzyme substrate and enzyme inhibitor structures
US20040003000A1 (en) 2001-01-29 2004-01-01 Smith Robin Young Systems, methods and computer program products for determining parameters for chemical synthesis
US20040006742A1 (en) 2002-05-20 2004-01-08 Slocombe David N. Document structure identifier
US20040236740A1 (en) 2000-12-08 2004-11-25 Ingenuity Systems, Inc. Method and system for performing information extraction and quality control for a knowledgebase
EP1526471A1 (en) 2003-10-24 2005-04-27 Microsoft Corporation System and method for file sharing in peer-to-peer group shared spaces
US20050102313A1 (en) 2003-04-08 2005-05-12 Newriver, Inc. System for locating data elements within originating data sources
US20050226495A1 (en) 2004-04-02 2005-10-13 Li Jonathan Q System and method for processing training data for a statistical application
US20060123113A1 (en) 2004-12-03 2006-06-08 Bellsouth Intellectual Property Corporation System, method, apparatus, and product for resource sharing
US20060277201A1 (en) 2001-01-05 2006-12-07 Symyx Technologies, Inc. Laboratory database system and method for combinatorial materials research
US20070016853A1 (en) * 2005-07-14 2007-01-18 Molsoft, Llc Structured documents and systems, methods and computer programs for creating, producing and displaying three dimensional objects and other related information in those structured documents
US20070174765A1 (en) * 2003-11-18 2007-07-26 Gh, Llc Content communication system and methods
WO2007092842A2 (en) 2006-02-06 2007-08-16 Abacalab, Inc. An apparatus and method for mobile graphical cheminformatic
US20070260583A1 (en) 2004-03-05 2007-11-08 Applied Research Systems Ars Holding N.V. Method for fast substructure searching in non-enumerated chemical libraries
US20080140616A1 (en) 2005-09-21 2008-06-12 Nicolas Encina Document processing
US20080228774A1 (en) 2007-03-15 2008-09-18 Accenture Global Services Gmbh Collaboration system
US20090006411A1 (en) 2007-03-27 2009-01-01 Slc Consultants, Inc. Strategic Business Management System
US20090171975A1 (en) 2007-03-06 2009-07-02 Mcconnell Robert S Method and system for tracking carbon credits and other carbon valuation units
US20090273571A1 (en) 2008-05-01 2009-11-05 Alan Bowens Gesture Recognition
US7650327B2 (en) 2002-03-01 2010-01-19 Marine Biological Laboratory Managing taxonomic information
US7805437B1 (en) 2002-05-15 2010-09-28 Spotfire Ab Interactive SAR table
US20100257457A1 (en) 2009-04-07 2010-10-07 De Goes John A Real-time content collaboration
US20110163944A1 (en) 2010-01-05 2011-07-07 Apple Inc. Intuitive, gesture-based communications with physics metaphors
US20110221656A1 (en) 2010-02-28 2011-09-15 Osterhout Group, Inc. Displayed content vision correction with electrically adjustable lens
US20110276589A1 (en) 2010-05-03 2011-11-10 Smith Robin Y Systems, methods, and apparatus for processing documents to identify structures
US20120019488A1 (en) * 2009-12-14 2012-01-26 Mccarthy John P Stylus for a touchscreen display
US20120078853A1 (en) 2010-09-21 2012-03-29 Cambridgesoft Corporation Systems, methods, and apparatus for facilitating chemical analyses
US20120110486A1 (en) 2010-10-01 2012-05-03 Imerj LLC Universal clipboard
US20120173622A1 (en) 2011-01-04 2012-07-05 Samsung Electronics Co., Ltd. Social screen casting
US20120188147A1 (en) 2010-12-31 2012-07-26 Ebay, Inc. Methods and systems for displaying content on multiple networked devices with a simple command
US20120246228A1 (en) 2011-03-21 2012-09-27 Microsoft Corporation Sharing channels
US20120284638A1 (en) 2011-05-06 2012-11-08 Kibits Corp. System and method for social interaction, sharing and collaboration
US20120311038A1 (en) 2011-06-06 2012-12-06 Trinh Trung Tim Proximity Session Mobility Extension
US20120324368A1 (en) 2011-06-14 2012-12-20 Logmein, Inc. Object transfer method using gesture-based computing device
GB2493830A (en) 2011-08-17 2013-02-20 Integrated Chemistry Design Inc Using finger gestures to create and modify a chemical structure on a touch screen device
US20130044042A1 (en) 2011-08-18 2013-02-21 Google Inc. Wearable device with input and output structures
WO2013126077A1 (en) 2012-02-24 2013-08-29 Cambridgesoft Corporation Systems, methods, and apparatus for drawing chemical structures using touch and gestures
US20130222265A1 (en) 2012-02-24 2013-08-29 Robin Young Smith Systems, Methods, and Apparatus for Drawing Chemical Structures Using Touch and Gestures

Family Cites Families (105)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5249045A (en) * 1954-12-24 1993-09-28 Lemelson Jerome H Apparatus and methods for automated observation of three-dimensional objects
US4967372A (en) 1986-05-16 1990-10-30 The United States Of America As Represented By The Department Of Health And Human Services Automatic orientation and interactive addressing of display
US5025388A (en) * 1988-08-26 1991-06-18 Cramer Richard D Iii Comparative molecular field analysis (CoMFA)
US5008831A (en) 1989-01-12 1991-04-16 The United States Of America As Represented By The Department Of Health And Human Services Method for producing high quality chemical structure diagrams
US5249137A (en) 1990-03-23 1993-09-28 Xerox Corporation Computer-aided chemical illustration system
US5345516A (en) * 1991-04-19 1994-09-06 International Business Machines Corporation Apparatus and method for parsing a chemical string
US5434971A (en) 1991-06-28 1995-07-18 Digital Equipment Corp. System for constructing a table data structure based on an associated configuration data structure and loading it with chemical sample physical data
US5386507A (en) 1991-07-18 1995-01-31 Teig; Steven L. Computer graphics system for selectively modelling molecules and investigating the chemical and physical properties thereof
US5394455A (en) * 1993-04-30 1995-02-28 The Regents Of The University Of California Digitally aided microfluoroscopy and fluorospot system and method of using the same
US5461580A (en) 1993-04-30 1995-10-24 Xerox Corporation Computer-aided chemical illustration system
US5577239A (en) 1994-08-10 1996-11-19 Moore; Jeffrey Chemical structure storage, searching and retrieval system
US5517237A (en) * 1994-08-22 1996-05-14 The United States Of America As Represented By The Secretary Of The Interior Video photometric color system for processing color specific streams
US5463564A (en) 1994-09-16 1995-10-31 3-Dimensional Pharmaceuticals, Inc. System and method of automatically generating chemical compounds with desired properties
US5902732A (en) * 1995-10-04 1999-05-11 Cytoscan Sciences Llc Drug screening process measuring changes in cell volume
US6078681A (en) * 1996-03-18 2000-06-20 Marine Biological Laboratory Analytical imaging system and process
US5920317A (en) * 1996-06-11 1999-07-06 Vmi Technologies Incorporated System and method for storing and displaying ultrasound images
US6017390A (en) 1996-07-24 2000-01-25 The Regents Of The University Of California Growth of oriented crystals at polymerized membranes
US6006241A (en) * 1997-03-14 1999-12-21 Microsoft Corporation Production of a video stream with synchronized annotations over a computer network
US6449653B2 (en) * 1997-03-25 2002-09-10 Microsoft Corporation Interleaved multiple multimedia stream for synchronized transmission over a computer network
US5993001A (en) * 1997-06-05 1999-11-30 Joslin Diabetes Center, Inc. Stereoscopic imaging system for retinal examination with remote examination unit
US7840912B2 (en) 2006-01-30 2010-11-23 Apple Inc. Multi-touch gesture dictionary
US9292111B2 (en) 1998-01-26 2016-03-22 Apple Inc. Gesturing with a multipoint sensing device
US7663607B2 (en) 2004-05-06 2010-02-16 Apple Inc. Multipoint touchscreen
WO1999040117A1 (en) 1998-02-06 1999-08-12 Ilexus Pty. Limited THREE-DIMENSIONAL STRUCTURES AND MODELS OF Fc RECEPTORS AND USES THEREOF
US6154771A (en) * 1998-06-01 2000-11-28 Mediastra, Inc. Real-time receipt, decompression and play of compressed streaming video/hypervideo; with thumbnail display of past scenes and with replay, hyperlinking and/or recording permissively intiated retrospectively
US7912689B1 (en) 1999-02-11 2011-03-22 Cambridgesoft Corporation Enhancing structure diagram generation through use of symmetry
US6582233B1 (en) 1999-05-14 2003-06-24 Accelrys Inc. Apparatus and method for monitoring the validity of a molecular model
US6973616B1 (en) * 1999-06-24 2005-12-06 Microsoft Corporation Associating annotations with a content source
CA2368098A1 (en) 1999-06-25 2001-01-04 Genset S.A. Complementary dna's encoding proteins with signal peptides
EP1067466A2 (en) * 1999-07-09 2001-01-10 Smithkline Beecham Genome browser interface
AU2001234779A1 (en) 2000-02-03 2001-08-14 Nanoscale Combinatorial Synthesis, Inc. Structure identification methods using mass measurements
US20020049548A1 (en) 2000-04-03 2002-04-25 Libraria, Inc. Chemistry resource database
US20040024493A1 (en) 2000-05-08 2004-02-05 Magnus Fagrell Method, system, and sub-system, for processing a chemical reaction
US8086672B2 (en) 2000-06-17 2011-12-27 Microsoft Corporation When-free messaging
US7043415B1 (en) * 2001-01-31 2006-05-09 Pharsight Corporation Interactive graphical environment for drug model generation
US7030861B1 (en) 2001-02-10 2006-04-18 Wayne Carl Westerman System and method for packing multi-touch gestures onto a hand
GB0106441D0 (en) 2001-03-15 2001-05-02 Bayer Ag Method for generating a hierarchical topological tree of 2D or 3D-structural formulas of chemical compounds for property optimization of chemical compounds
US20040101876A1 (en) * 2002-05-31 2004-05-27 Liat Mintz Methods and systems for annotating biomolecular sequences
US8865347B2 (en) 2001-09-28 2014-10-21 Siluria Technologies, Inc. Digital alloys and methods for forming the same
US20050010603A1 (en) 2001-10-31 2005-01-13 Berks Andrew H. Display for Markush chemical structures
US20040171062A1 (en) 2002-02-28 2004-09-02 Plexxikon, Inc. Methods for the design of molecular scaffolds and ligands
IL163921A0 (en) 2002-03-22 2005-12-18 Morphochem Ag Komb Chemie A method for the identification and synthesis of chemical compounds
WO2004057439A2 (en) 2002-05-31 2004-07-08 University Of Utah Research Foundation System and method for visual annotation and knowledge representation
US20040068758A1 (en) * 2002-10-02 2004-04-08 Mike Daily Dynamic video annotation
JP3685404B2 (en) 2002-11-07 2005-08-17 インターナショナル・ビジネス・マシーンズ・コーポレーション MOLECULAR INFORMATION PROVIDING SYSTEM, MOLECULAR INFORMATION PROVIDING DEVICE, MOLECULAR INFORMATION PROVIDING METHOD, METHOD FOR CONTROLLING INFORMATION PROCESSING DEVICE AS MOLECULAR INFORMATION PROVIDING DEVICE, PROGRAM FOR EXECUTING THE INFORMATION PROCESSING DEVICE, AND MACHINE RECORDING THE PROGRAM Readable storage medium and grid computing support device for molecular orbital calculation
US20040122641A1 (en) 2002-12-20 2004-06-24 Lab2Plant, Inc. (An Indiana Corporation) System and method for chemical process scale-up and preliminary design and analysis
US7904797B2 (en) * 2003-01-21 2011-03-08 Microsoft Corporation Rapid media group annotation
US7334186B2 (en) * 2003-04-30 2008-02-19 Hewlett-Packard Development Company, L.P. System and method for creation of video annotations
US20040249791A1 (en) 2003-06-03 2004-12-09 Waters Michael D. Method and system for developing and querying a sequence driven contextual knowledge base
US9396304B2 (en) * 2003-07-10 2016-07-19 Wisconsin Alumni Research Foundation Computer systems for annotation of single molecule fragments
DE602004023165D1 (en) 2003-10-15 2009-10-29 Canon Kk Selective preview and proofing of documents or layouts containing variable data
US20050123993A1 (en) 2003-12-09 2005-06-09 Stephan Brunner Methods of determining ligand residue binding affinity
US20050131894A1 (en) 2003-12-11 2005-06-16 Vuong Chau M. System and method for providing identification and search information
AU2005206388A1 (en) * 2004-01-27 2005-08-04 Compugen Ltd. Methods and systems for annotating biomolecular sequences
US7899827B2 (en) 2004-03-09 2011-03-01 International Business Machines Corporation System and method for the indexing of organic chemical structures mined from text documents
US7176887B2 (en) 2004-03-23 2007-02-13 Fujitsu Limited Environmental modeling for motion controlled handheld devices
US7394946B2 (en) * 2004-05-18 2008-07-01 Agfa Healthcare Method for automatically mapping of geometric objects in digital medical images
EP1628234A1 (en) 2004-06-07 2006-02-22 Universita' Degli Studi di Milano-Bicocca Method of construction and selection of virtual libraries in combinatorial chemistry
US20070276636A1 (en) 2004-06-26 2007-11-29 Wythoff Barry J System for visualization and analysis of numerical and chemical information
US8443279B1 (en) * 2004-10-13 2013-05-14 Stryker Corporation Voice-responsive annotation of video generated by an endoscopic camera
US20060241864A1 (en) 2005-04-22 2006-10-26 Outland Research, Llc Method and apparatus for point-and-send data transfer within an ubiquitous computing environment
US7460730B2 (en) * 2005-08-04 2008-12-02 Microsoft Corporation Video registration and image sequence stitching
US7613574B2 (en) 2005-10-28 2009-11-03 S-Matrix System and method for automating scientific and engineering experimentation for deriving surrogate response data
GB0603923D0 (en) * 2006-02-28 2006-04-05 Perkinelmer Ltd Apparatus and methods for imaging and modification of biological samples
US8543337B2 (en) * 2006-04-21 2013-09-24 The Mathworks, Inc. Block diagram explorer in a method and apparatus for integrated modeling, simulation and analysis of chemical and biological systems
US8140267B2 (en) * 2006-06-30 2012-03-20 International Business Machines Corporation System and method for identifying similar molecules
US7692629B2 (en) 2006-12-07 2010-04-06 Microsoft Corporation Operating touch screen interfaces
US7924271B2 (en) 2007-01-05 2011-04-12 Apple Inc. Detecting gestures on multi-event sensitive devices
US8468001B2 (en) * 2007-03-22 2013-06-18 Infosys Limited Ligand identification and matching software tools
US8681104B2 (en) 2007-06-13 2014-03-25 Apple Inc. Pinch-throw and translation gestures
GB2453318A (en) 2007-09-03 2009-04-08 Idan Zuta Method and system for authoring a patent application
US7853558B2 (en) * 2007-11-09 2010-12-14 Vibrant Media, Inc. Intelligent augmentation of media content
US8077157B2 (en) 2008-03-31 2011-12-13 Intel Corporation Device, system, and method of wireless transfer of files
US8296670B2 (en) 2008-05-19 2012-10-23 Microsoft Corporation Accessing a menu utilizing a drag-operation
US8139650B2 (en) * 2008-08-07 2012-03-20 California Institute Of Technology Fast noise reduction in digital images and video
US8427424B2 (en) 2008-09-30 2013-04-23 Microsoft Corporation Using physical objects in conjunction with an interactive surface
KR101544475B1 (en) 2008-11-28 2015-08-13 엘지전자 주식회사 Controlling of Input/Output through touch
US8547342B2 (en) 2008-12-22 2013-10-01 Verizon Patent And Licensing Inc. Gesture-based delivery from mobile device
EP2224389A1 (en) 2009-02-27 2010-09-01 Research In Motion Limited System and method for improved address entry
US8380866B2 (en) * 2009-03-20 2013-02-19 Ricoh Company, Ltd. Techniques for facilitating annotations
US8499239B2 (en) 2009-08-28 2013-07-30 Microsoft Corporation Globe container
US8407578B2 (en) * 2009-09-23 2013-03-26 International Business Machines Corporation Chemical web browser
US8312392B2 (en) 2009-10-02 2012-11-13 Qualcomm Incorporated User interface gestures and methods for providing file sharing functionality
EP2493387A4 (en) * 2009-10-30 2017-07-19 The Johns Hopkins University Visual tracking and annotation of clinically important anatomical landmarks for surgical interventions
US8400548B2 (en) * 2010-01-05 2013-03-19 Apple Inc. Synchronized, interactive augmented reality displays for multifunction devices
US20120154440A1 (en) 2010-11-11 2012-06-21 Openeye Scientific Software, Inc. Augmented 2d representation of molecular structures
US10303357B2 (en) 2010-11-19 2019-05-28 TIVO SOLUTIONS lNC. Flick to send or display content
US9959644B2 (en) * 2010-12-21 2018-05-01 Qualcomm Incorporated Computerized method and device for annotating at least one feature of an image of a view
TW201235928A (en) 2011-02-22 2012-09-01 Acer Inc Handheld devices, electronic devices, and data transmission methods and computer program products thereof
US9558403B2 (en) * 2011-08-26 2017-01-31 Council Of Scientific And Industrial Research Chemical structure recognition tool
CA2866774C (en) * 2012-03-21 2021-11-23 Zymeworks Inc. Systems and methods for making two dimensional graphs of complex molecules
US9619587B2 (en) 2012-04-09 2017-04-11 Autodesk, Inc. Decomposition of 3D geometry into developable surface patches and 2D cut patterns
AU2013262476A1 (en) 2012-05-18 2014-12-04 Georgetown University Populating and searching a drug informatics database
US9147221B2 (en) * 2012-05-23 2015-09-29 Qualcomm Incorporated Image-driven view management for annotations
US9421716B2 (en) 2012-08-08 2016-08-23 Makerbot Industries, Llc Photo booth for three-dimensional images
WO2014026152A2 (en) * 2012-08-10 2014-02-13 Assurerx Health, Inc. Systems and methods for pharmacogenomic decision support in psychiatry
US20140089328A1 (en) 2012-09-27 2014-03-27 International Business Machines Corporation Association of data to a biological sequence
US20140173476A1 (en) 2012-12-13 2014-06-19 Cambridgesoft Corporation Draw-ahead feature for biological sequence drawing applications
US9535583B2 (en) 2012-12-13 2017-01-03 Perkinelmer Informatics, Inc. Draw-ahead feature for chemical structure drawing applications
US9380275B2 (en) * 2013-01-30 2016-06-28 Insitu, Inc. Augmented video system providing enhanced situational awareness
US8854361B1 (en) 2013-03-13 2014-10-07 Cambridgesoft Corporation Visually augmenting a graphical rendering of a chemical structure representation or biological sequence representation with multi-dimensional information
WO2014163749A1 (en) 2013-03-13 2014-10-09 Cambridgesoft Corporation Systems and methods for gesture-based sharing of data between separate electronic devices
US9430127B2 (en) 2013-05-08 2016-08-30 Cambridgesoft Corporation Systems and methods for providing feedback cues for touch screen interface interaction with chemical and biological structure drawing applications
US9751294B2 (en) 2013-05-09 2017-09-05 Perkinelmer Informatics, Inc. Systems and methods for translating three dimensional graphic molecular models to computer aided design format
US9582864B2 (en) 2014-01-10 2017-02-28 Perkinelmer Cellular Technologies Germany Gmbh Method and system for image correction using a quasiperiodic grid

Patent Citations (44)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020161599A1 (en) 2000-09-05 2002-10-31 Faerman Carlos H. Electronic database of enzyme substrate and enzyme inhibitor structures
US20040236740A1 (en) 2000-12-08 2004-11-25 Ingenuity Systems, Inc. Method and system for performing information extraction and quality control for a knowledgebase
US20060277201A1 (en) 2001-01-05 2006-12-07 Symyx Technologies, Inc. Laboratory database system and method for combinatorial materials research
US7676499B2 (en) 2001-01-05 2010-03-09 Symyx Solutions, Inc. Management of data from combinatorial materials experiments
US20040003000A1 (en) 2001-01-29 2004-01-01 Smith Robin Young Systems, methods and computer program products for determining parameters for chemical synthesis
US7250950B2 (en) 2001-01-29 2007-07-31 Symyx Technologies, Inc. Systems, methods and computer program products for determining parameters for chemical synthesis
US7650327B2 (en) 2002-03-01 2010-01-19 Marine Biological Laboratory Managing taxonomic information
US7805437B1 (en) 2002-05-15 2010-09-28 Spotfire Ab Interactive SAR table
US20040006742A1 (en) 2002-05-20 2004-01-08 Slocombe David N. Document structure identifier
US20050102313A1 (en) 2003-04-08 2005-05-12 Newriver, Inc. System for locating data elements within originating data sources
EP1526471A1 (en) 2003-10-24 2005-04-27 Microsoft Corporation System and method for file sharing in peer-to-peer group shared spaces
US20070174765A1 (en) * 2003-11-18 2007-07-26 Gh, Llc Content communication system and methods
US20070260583A1 (en) 2004-03-05 2007-11-08 Applied Research Systems Ars Holding N.V. Method for fast substructure searching in non-enumerated chemical libraries
US20050226495A1 (en) 2004-04-02 2005-10-13 Li Jonathan Q System and method for processing training data for a statistical application
US20060123113A1 (en) 2004-12-03 2006-06-08 Bellsouth Intellectual Property Corporation System, method, apparatus, and product for resource sharing
US20070016853A1 (en) * 2005-07-14 2007-01-18 Molsoft, Llc Structured documents and systems, methods and computer programs for creating, producing and displaying three dimensional objects and other related information in those structured documents
US20080140616A1 (en) 2005-09-21 2008-06-12 Nicolas Encina Document processing
US7707206B2 (en) 2005-09-21 2010-04-27 Praxeon, Inc. Document processing
WO2007092842A2 (en) 2006-02-06 2007-08-16 Abacalab, Inc. An apparatus and method for mobile graphical cheminformatic
US20090171975A1 (en) 2007-03-06 2009-07-02 Mcconnell Robert S Method and system for tracking carbon credits and other carbon valuation units
US20080228774A1 (en) 2007-03-15 2008-09-18 Accenture Global Services Gmbh Collaboration system
US20090006411A1 (en) 2007-03-27 2009-01-01 Slc Consultants, Inc. Strategic Business Management System
US20090273571A1 (en) 2008-05-01 2009-11-05 Alan Bowens Gesture Recognition
US20100257457A1 (en) 2009-04-07 2010-10-07 De Goes John A Real-time content collaboration
US20120019488A1 (en) * 2009-12-14 2012-01-26 Mccarthy John P Stylus for a touchscreen display
US20110163944A1 (en) 2010-01-05 2011-07-07 Apple Inc. Intuitive, gesture-based communications with physics metaphors
US20110221656A1 (en) 2010-02-28 2011-09-15 Osterhout Group, Inc. Displayed content vision correction with electrically adjustable lens
US20110276589A1 (en) 2010-05-03 2011-11-10 Smith Robin Y Systems, methods, and apparatus for processing documents to identify structures
US8433723B2 (en) 2010-05-03 2013-04-30 Cambridgesoft Corporation Systems, methods, and apparatus for processing documents to identify structures
WO2011140148A1 (en) 2010-05-03 2011-11-10 Cambridgesoft Corporation Method and apparatus for processing documents to identify chemical structures
US20130218878A1 (en) 2010-05-03 2013-08-22 Cambridgesoft Corporation Systems, methods, and apparatus for processing documents to identify structures
US20120078853A1 (en) 2010-09-21 2012-03-29 Cambridgesoft Corporation Systems, methods, and apparatus for facilitating chemical analyses
US8538983B2 (en) 2010-09-21 2013-09-17 Cambridgesoft Corporation Systems, methods, and apparatus for facilitating chemical analyses
US20120110486A1 (en) 2010-10-01 2012-05-03 Imerj LLC Universal clipboard
US20120188147A1 (en) 2010-12-31 2012-07-26 Ebay, Inc. Methods and systems for displaying content on multiple networked devices with a simple command
US20120173622A1 (en) 2011-01-04 2012-07-05 Samsung Electronics Co., Ltd. Social screen casting
US20120246228A1 (en) 2011-03-21 2012-09-27 Microsoft Corporation Sharing channels
US20120284638A1 (en) 2011-05-06 2012-11-08 Kibits Corp. System and method for social interaction, sharing and collaboration
US20120311038A1 (en) 2011-06-06 2012-12-06 Trinh Trung Tim Proximity Session Mobility Extension
US20120324368A1 (en) 2011-06-14 2012-12-20 Logmein, Inc. Object transfer method using gesture-based computing device
GB2493830A (en) 2011-08-17 2013-02-20 Integrated Chemistry Design Inc Using finger gestures to create and modify a chemical structure on a touch screen device
US20130044042A1 (en) 2011-08-18 2013-02-21 Google Inc. Wearable device with input and output structures
WO2013126077A1 (en) 2012-02-24 2013-08-29 Cambridgesoft Corporation Systems, methods, and apparatus for drawing chemical structures using touch and gestures
US20130222265A1 (en) 2012-02-24 2013-08-29 Robin Young Smith Systems, Methods, and Apparatus for Drawing Chemical Structures Using Touch and Gestures

Non-Patent Citations (34)

* Cited by examiner, † Cited by third party
Title
Algorri et al. Reconstruction of Chemical Molecules from Images, 2007 Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC '07), Lyon, France, Aug. 22-26, 2007, in Conjunction with the Biennial Conference of the Societe Francaise de Genie Biologique et Medical (SFGB), Aug. 22, 2007, pp. 4609-4612.
Australian Patent Application No. 2011248243, APO Examination Report No. 1, issued Nov. 5, 2013, 3 pages.
Ben Shine and Dana Tenneson, "ChemPad3-A Tutorial", May 21, 2008. *
Bennett, Samsung's AllShare Play pushes pictures from phone to cloud and TV, <http://news.cnet.com/8301-1035-3-57391735-94/samsungs-allshare-play-pushes-pictures-from-phone-to-cloud-and-tv/> [retrieved Oct. 24, 2013], Mar. 6, 2012, 9 pages.
Carmigniani, J. et al., Augmented Reality Technologies, Systems and Applications, Multimedia Tools and Applications 51:341-377, (2011).
Casey et al. Optical Recognition of Chemical Graphics, Document Analysis and Recognition, 1993, Proceedings of the Second International Conference on, Tsukuba Science City, Japan, Oct. 20-22, 1993, Los Alamitos, CA, USA, IEEE Comput. Soc., Oct. 20, 1993, pp. 627-631.
Filippov et al. Optical Structure Recognition Software to Recover Chemical Information: OSRA, An Open Source Solution, Journal of Chemical Information and Modeling, vol. 49, No. 3, Mar. 23, 2009, pp. 740-743.
First Office Action for Chinese Application No. 201190000597.X, mailed May 29, 2013, 4 pages Including Translation.
Flick-Simply the easiest way to share, <http://getflick.io/> [retrieved Aug. 23, 2013], 4 pages.
Furlan, Rod, Build Your Own Google Glass, Resources Hands On, IEEE Spectrum, IEEE Inc., vol. 50, No. 1, pp. 20-21, (Jan. 1, 2013).
Gonzalez-Villanueva et al., WallShare: A Collaborative Multi-pointer System for Portable Devices, Nov. 19, 2012, 7 pages.
International Search Report for PCT Application No. PCT/US2011/035070, mailed Oct. 6, 2011, 4 pages.
International Search Report for PCT/US2014/016249, 4 pages, Aug. 13, 2014.
International Search Report, PCT/US2012/026574, dated Mar. 20, 2013, 4 pgs.
International Written Opinion for PCT/US2014/016249, 7 pages, Aug. 13, 2014.
iTunes Preview, Flick for iPhone, iPad, and iPod touch on the iTunes App Store, <https://itunes.apple.com/us/app/flick./id644265534?mt=8>, [retrieved Oct. 28, 2013], 2 pages.
Jurach, T., Microsoft Outlook Quick Start Email Guide!, 1-3 (2006).
Kim, et al, Development of a Gesture-Based Molecular Visualization Tool Based on Virtual Reality for Molecular Docking, Bull. Korean Chem. Soc. 2004, vol. 25, No. 10 pp. 1571-1574.
Layar, What is Layar?, <http://www.layar.com/features/> [retrieved Nov. 14, 2012], 7 pages.
Lorensen et al., Marching Cubes: A high resolution 3D surface construction algorithm. In: Computer Graphics, vol. 21, No. 4, Jul. 1987.
Lucero et al., Pass-Them-Around: Collaborative Use of Mobile Phones for Photo Sharing, CHI 2011-Session: Photo Sharing, May 7-11, 2011, Vancouver, BC, Canada, 10 pages.
Park et al. Automated Extraction of Chemical Structure Information From Digital Raster Images, Chemistry Central Journal, Biomed Central Ltd., vol. 3, No. 1, Feb. 5, 2009, pp. 1-16.
Park et al. Tunable Machine Vision-Based Strategy for Automated Annotation of Chemical Databases, Journal of Chemical Information and Modeling, vol. 49, No. 4, Apr. 27, 2009, pp. 780-787.
Pering et al., Enabling Pervasive Collaboration with Platform Composition, Intel Research Santa Clara, 2009, 18 pages.
Pering et al., Spontaneous Marriages of Mobile Devices and Interactive Spaces, Communications of the ACM, Sep. 2005, vol. 48, No. 9, pp. 53-59, 7 pages.
Scheible et al., MobiToss: A Novel gesture based interface for creating and sharing mobile multimedia art on large public displays, MM'08, Oct. 26-31, 2008 Vancouver British Columbia, Canada, pp. 957-960, 4 pages.
Tsotsis, Word Lens Translates Words Inside of Images. Yes Really., <http://techcrunch.com/2010/12/16/world-lens-translates-words-inside-of-images-yes-really/> [retrieved Nov. 14, 2012], Dec. 16, 2010, 3 pages.
Valko et al. CLiDE Pro: The Latest Generation of CLiDE, a Tool for Optical Chemical Structure Recognition, Journal of Chemical Information and Modeling, vol. 49, No. 4, Apr. 27, 2009, pp. 780-787.
Weinberg et al., ZooZBeat: a Gesture-based Mobile Music Studio, NIME 2009, pp. 312-315, 4 pages.
Williams, et al, Smart Phones, a Powerful Tool in the Chemistry Classroom, Journal of Chemical Education, 2011, pp. 683-686.
Williams, et al., Mobile apps for chemistry in the world of drug discovery, Drug Discovery Today, vol. 16, Nos. 21/22, Nov. 2011, pp. 928-939.
Wobbrock et al, User-Defined Gestures for Surface Computing, CHI-Tabletop Gestures, Apr. 7, 2009, pp. 1083-1092.
Written Opinion for PCT Application No. PCT/US2011/035070, mailed Oct. 6, 2011, 9 pages.
Written Opinion, PCT/US2012/026574, dated Mar. 20, 2013, 8 pgs.

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120137207A1 (en) * 2010-11-29 2012-05-31 Heinz Christopher J Systems and methods for converting a pdf file
US9251123B2 (en) * 2010-11-29 2016-02-02 Hewlett-Packard Development Company, L.P. Systems and methods for converting a PDF file
US11164660B2 (en) 2013-03-13 2021-11-02 Perkinelmer Informatics, Inc. Visually augmenting a graphical rendering of a chemical structure representation or biological sequence representation with multi-dimensional information
US9751294B2 (en) 2013-05-09 2017-09-05 Perkinelmer Informatics, Inc. Systems and methods for translating three dimensional graphic molecular models to computer aided design format
US10572545B2 (en) 2017-03-03 2020-02-25 Perkinelmer Informatics, Inc Systems and methods for searching and indexing documents comprising chemical information

Also Published As

Publication number Publication date
EP2973083B1 (en) 2020-09-09
WO2014163777A1 (en) 2014-10-09
US20150112604A1 (en) 2015-04-23
US20140267240A1 (en) 2014-09-18
EP2973083A1 (en) 2016-01-20
US11164660B2 (en) 2021-11-02

Similar Documents

Publication Publication Date Title
US11164660B2 (en) Visually augmenting a graphical rendering of a chemical structure representation or biological sequence representation with multi-dimensional information
US11841895B2 (en) Processing the results of multiple search queries in a mapping application
KR102416985B1 (en) Virtual vision system
KR102317167B1 (en) Duplicate Tracking System
US10275122B2 (en) Semantic card view
US20130344899A1 (en) Presenting information for a current location or time
US20220038403A1 (en) Media item attachment system
US11301500B2 (en) Clustering for geo-enriched data
KR20210031768A (en) Content suggestion system
KR20220149619A (en) Shared Augmented Reality System
US11335060B2 (en) Location based augmented-reality system
US20200210703A1 (en) Dynamic contextual media filter
US10199013B2 (en) Digital image comparison
US20240045899A1 (en) Icon based tagging
US20200005065A1 (en) Object classification for image recognition processing
US11367249B2 (en) Tool for viewing 3D objects in 3D models
US9135317B2 (en) Image-centric documents
US10673945B2 (en) Framework for data geocoding

Legal Events

Date Code Title Description
AS Assignment

Owner name: CAMBRIDGESOFT CORPORATION, MASSACHUSETTS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SMITH, ROBIN Y.;REEL/FRAME:031509/0368

Effective date: 20131023

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: PERKINELMER INFORMATICS, INC., MASSACHUSETTS

Free format text: CHANGE OF NAME;ASSIGNOR:CAMBRIDGESOFT CORPORATION;REEL/FRAME:035333/0851

Effective date: 20150212

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551)

Year of fee payment: 4

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8

AS Assignment

Owner name: REVVITY SIGNALS SOFTWARE, INC., MASSACHUSETTS

Free format text: CHANGE OF NAME;ASSIGNOR:PERKINELMER INFORMATICS, INC.;REEL/FRAME:066797/0597

Effective date: 20230707