WO2004070653A2 - Image analysis system and method - Google Patents


Info

Publication number
WO2004070653A2
WO2004070653A2 PCT/US2004/002633
Authority
WO
WIPO (PCT)
Prior art keywords
image
sample
pixels
imaging
information
Prior art date
Application number
PCT/US2004/002633
Other languages
French (fr)
Other versions
WO2004070653A3 (en)
Inventor
Rhett L. Affleck
Robert K. Levin
John E. Lillig
Robert K. Neeper
William R. Ewing
Duane Desieno
Eric Hansen
Mike Bodnar
Original Assignee
Discovery Partners International
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Discovery Partners International filed Critical Discovery Partners International
Publication of WO2004070653A2 publication Critical patent/WO2004070653A2/en
Publication of WO2004070653A3 publication Critical patent/WO2004070653A3/en

Classifications

    • CCHEMISTRY; METALLURGY
    • C30CRYSTAL GROWTH
    • C30BSINGLE-CRYSTAL GROWTH; UNIDIRECTIONAL SOLIDIFICATION OF EUTECTIC MATERIAL OR UNIDIRECTIONAL DEMIXING OF EUTECTOID MATERIAL; REFINING BY ZONE-MELTING OF MATERIAL; PRODUCTION OF A HOMOGENEOUS POLYCRYSTALLINE MATERIAL WITH DEFINED STRUCTURE; SINGLE CRYSTALS OR HOMOGENEOUS POLYCRYSTALLINE MATERIAL WITH DEFINED STRUCTURE; AFTER-TREATMENT OF SINGLE CRYSTALS OR A HOMOGENEOUS POLYCRYSTALLINE MATERIAL WITH DEFINED STRUCTURE; APPARATUS THEREFOR
    • C30B29/00Single crystals or homogeneous polycrystalline material with defined structure characterised by the material or by their shape
    • C30B29/54Organic compounds
    • C30B29/58Macromolecular compounds
    • CCHEMISTRY; METALLURGY
    • C30CRYSTAL GROWTH
    • C30BSINGLE-CRYSTAL GROWTH; UNIDIRECTIONAL SOLIDIFICATION OF EUTECTIC MATERIAL OR UNIDIRECTIONAL DEMIXING OF EUTECTOID MATERIAL; REFINING BY ZONE-MELTING OF MATERIAL; PRODUCTION OF A HOMOGENEOUS POLYCRYSTALLINE MATERIAL WITH DEFINED STRUCTURE; SINGLE CRYSTALS OR HOMOGENEOUS POLYCRYSTALLINE MATERIAL WITH DEFINED STRUCTURE; AFTER-TREATMENT OF SINGLE CRYSTALS OR A HOMOGENEOUS POLYCRYSTALLINE MATERIAL WITH DEFINED STRUCTURE; APPARATUS THEREFOR
    • C30B7/00Single-crystal growth from solutions using solvents which are liquid at normal temperature, e.g. aqueous solutions
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/17Systems in which incident light is modified in accordance with the properties of the material investigated
    • G01N21/25Colour; Spectral properties, i.e. comparison of effect of material on the light at two or more different wavelengths or wavelength bands
    • G01N21/251Colorimeters; Construction thereof
    • G01N21/253Colorimeters; Construction thereof for batch operation, i.e. multisample apparatus
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N35/00Automatic analysis not limited to methods or materials provided for in any single one of groups G01N1/00 - G01N33/00; Handling materials therefor
    • G01N35/02Automatic analysis not limited to methods or materials provided for in any single one of groups G01N1/00 - G01N33/00; Handling materials therefor using a plurality of sample containers moved by a conveyor system past one or more treatment or analysis stations
    • G01N35/028Automatic analysis not limited to methods or materials provided for in any single one of groups G01N1/00 - G01N33/00; Handling materials therefor using a plurality of sample containers moved by a conveyor system past one or more treatment or analysis stations having reaction cells in the form of microtitration plates
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B21/00Microscopes
    • G02B21/0004Microscopes specially adapted for specific applications
    • G02B21/0016Technical microscopes, e.g. for inspection or measuring in industrial production processes
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B01PHYSICAL OR CHEMICAL PROCESSES OR APPARATUS IN GENERAL
    • B01LCHEMICAL OR PHYSICAL LABORATORY APPARATUS FOR GENERAL USE
    • B01L9/00Supporting devices; Holding devices
    • B01L9/52Supports specially adapted for flat sample carriers, e.g. for plates, slides, chips
    • B01L9/523Supports specially adapted for flat sample carriers, e.g. for plates, slides, chips for multisample carriers, e.g. used for microtitration plates
    • G01N15/1433
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N1/00Sampling; Preparing specimens for investigation
    • G01N1/28Preparing specimens for investigation including physical details of (bio-)chemical methods covered elsewhere, e.g. G01N33/50, C12Q
    • G01N1/40Concentrating samples
    • G01N1/4022Concentrating samples by thermal techniques; Phase changes
    • G01N2001/4027Concentrating samples by thermal techniques; Phase changes evaporation leaving a concentrated sample
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N15/00Investigating characteristics of particles; Investigating permeability, pore-volume, or surface-area of porous materials
    • G01N15/10Investigating individual particles
    • G01N15/14Electro-optical investigation, e.g. flow cytometers
    • G01N2015/1493Particle size
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N35/00Automatic analysis not limited to methods or materials provided for in any single one of groups G01N1/00 - G01N33/00; Handling materials therefor
    • G01N2035/00346Heating or cooling arrangements
    • G01N2035/00356Holding samples at elevated temperature (incubation)
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N35/00Automatic analysis not limited to methods or materials provided for in any single one of groups G01N1/00 - G01N33/00; Handling materials therefor
    • G01N2035/00346Heating or cooling arrangements
    • G01N2035/00455Controlling humidity in analyser
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N35/00Automatic analysis not limited to methods or materials provided for in any single one of groups G01N1/00 - G01N33/00; Handling materials therefor
    • G01N35/00584Control arrangements for automatic analysers
    • G01N35/00722Communications; Identification
    • G01N35/00871Communications between instruments or with remote terminals
    • G01N2035/00881Communications between instruments or with remote terminals network configurations
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N35/00Automatic analysis not limited to methods or materials provided for in any single one of groups G01N1/00 - G01N33/00; Handling materials therefor
    • G01N35/02Automatic analysis not limited to methods or materials provided for in any single one of groups G01N1/00 - G01N33/00; Handling materials therefor using a plurality of sample containers moved by a conveyor system past one or more treatment or analysis stations
    • G01N35/04Details of the conveyor system
    • G01N2035/0401Sample carriers, cuvettes or reaction vessels
    • G01N2035/0418Plate elements with several rows of samples
    • G01N2035/042Plate elements with several rows of samples moved independently, e.g. by fork manipulator
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N35/00Automatic analysis not limited to methods or materials provided for in any single one of groups G01N1/00 - G01N33/00; Handling materials therefor
    • G01N35/02Automatic analysis not limited to methods or materials provided for in any single one of groups G01N1/00 - G01N33/00; Handling materials therefor using a plurality of sample containers moved by a conveyor system past one or more treatment or analysis stations
    • G01N35/04Details of the conveyor system
    • G01N2035/0401Sample carriers, cuvettes or reaction vessels
    • G01N2035/0418Plate elements with several rows of samples
    • G01N2035/0425Stacks, magazines or elevators for plates
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N35/00Automatic analysis not limited to methods or materials provided for in any single one of groups G01N1/00 - G01N33/00; Handling materials therefor
    • G01N35/02Automatic analysis not limited to methods or materials provided for in any single one of groups G01N1/00 - G01N33/00; Handling materials therefor using a plurality of sample containers moved by a conveyor system past one or more treatment or analysis stations
    • G01N35/04Details of the conveyor system
    • G01N2035/046General conveyor features
    • G01N2035/0462Buffers [FIFO] or stacks [LIFO] for holding carriers between operations
    • G01N2035/0463Buffers [FIFO] or stacks [LIFO] for holding carriers between operations in incubators
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N35/00Automatic analysis not limited to methods or materials provided for in any single one of groups G01N1/00 - G01N33/00; Handling materials therefor
    • G01N35/0099Automatic analysis not limited to methods or materials provided for in any single one of groups G01N1/00 - G01N33/00; Handling materials therefor comprising robots or similar manipulators

Definitions

  • This invention generally relates to systems and methods for analyzing and exploiting images. More particularly, the invention relates to systems and methods for identifying and analyzing images of substances in samples.

Description of the Related Technology
  • X-ray crystallography is used to determine the three-dimensional structure of macromolecules, e.g., proteins, nucleic acids, etc.
  • This technique requires the growth of crystals of the target macromolecule.
  • crystal growth of macromolecules is dependent on several environmental conditions, e.g., temperature, pH, salt, and ionic strength.
  • growing crystals of macromolecules requires identifying the specific environmental conditions that will promote crystallization for any given macromolecule.
  • it is insufficient to find conditions that result in any type of crystal growth; rather, the objective is to determine those conditions that yield well-diffracting crystals, i.e., crystal configurations that provide the resolution desired to make the data useful.
  • an image may be periodically generated for each sample and provided to a technician, who need not be geographically co-located with the sample, to analyze the image to evaluate crystal growth.
  • Automated image evaluation techniques can also be used to analyze the image and evaluate the presence of crystal growth and increase system throughput.
  • current image analysis techniques do not always receive sufficient information from the sample image to accurately evaluate crystal growth. Important information learned as a result of analyzing the image is not automatically exploited, or used for further analysis to facilitate a user's evaluation of the image. Additionally, in current systems, the results of analyzing the image are not adequately provided to facilitate easy interpretation and efficient decision making.
  • the invention comprises a method of evaluating crystal growth in a crystal growth system, comprising receiving a first image of a sample, said first image generated by an imaging system using a first set of imaging parameters, analyzing information depicted in said first image to determine the contents of said sample, determining whether to generate another image of said sample based on the contents of said sample, providing information to said imaging system to generate a second image of the sample using a second set of imaging parameters, wherein said second set of imaging parameters comprises at least one imaging parameter that is different from an imaging parameter in said first set of imaging parameters, receiving said second image of said sample, and analyzing information depicted in said second image to determine the contents of said sample.
  • the different imaging parameter included in the method can be depth-of-field, illumination brightness level, focus, the area imaged, the center location of the area imaged, illumination source type, magnification, polarization, and/or illumination source position.
  • analyzing said first image comprises determining a region of interest in said first image, and wherein said information is used to adjust said second set of imaging parameters so that the imaging system generates a zoomed-in second image of said region of interest.
  • analyzing information in the method of evaluating crystal growth in a crystal growth system comprises determining whether said first image depicts the presence of crystals, and can further comprise, wherein said first image comprises pixels, and said determining comprises classifying said pixels and comparing the number of pixels classified as crystals to a threshold value.
  • a method of evaluating crystal growth in a crystal growth system comprises counting the number of said pixels depicting objects in the sample and evaluating said number using a threshold value.
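The pixel-classification-and-threshold test described in the two preceding bullets can be sketched as follows. This is a minimal illustration, not the patent's method: the intensity cutoff, the threshold value, and the function name are all assumptions made for the example.

```python
import numpy as np

def crystal_pixels_exceed_threshold(image, intensity_cutoff=180, min_crystal_pixels=50):
    """Classify each pixel as 'crystal' or 'background' by a simple intensity
    cutoff (an assumed classifier), then compare the count of crystal pixels
    to a threshold value, as in the claimed determining step."""
    crystal_mask = image >= intensity_cutoff   # per-pixel classification
    crystal_count = int(crystal_mask.sum())    # number of pixels classified as crystal
    return crystal_count >= min_crystal_pixels

# Example: a synthetic 100x100 grayscale image with a bright 10x10 region
img = np.zeros((100, 100), dtype=np.uint8)
img[40:50, 40:50] = 200
print(crystal_pixels_exceed_threshold(img))  # True: 100 bright pixels >= 50
```

A real classifier would be more elaborate than a single cutoff, but the count-versus-threshold decision step is the same.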
  • the method of analyzing crystal growth comprises receiving a first image having pixels depicting crystal growth information of a sample, identifying a first set of pixels in said first image comprising a first region of interest, receiving a second image having pixels depicting crystal growth information of said sample, identifying a second set of pixels in said second image comprising a second region of interest, merging said first set of pixels and said second set of pixels to form a composite image, and analyzing said composite image to identify crystal growth information of said sample.
  • said first image is generated by an imaging system using a first set of imaging parameters
  • said second image is generated by said imaging system using a second set of imaging parameters
  • said second set of imaging parameters comprises at least one imaging parameter that is different from the imaging parameters in said first set of imaging parameters
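The merge step of the composite-image method above can be sketched like this; the rectangular ROI representation and function names are assumptions for illustration:

```python
import numpy as np

def merge_regions(first_image, second_image, first_roi, second_roi):
    """Copy the pixels of two rectangular regions of interest, one from each
    image of the same sample, into a blank composite image that can then be
    analyzed as a whole. Each ROI is (row, col, height, width)."""
    composite = np.zeros_like(first_image)
    for img, (r, c, h, w) in ((first_image, first_roi), (second_image, second_roi)):
        composite[r:r+h, c:c+w] = img[r:r+h, c:c+w]
    return composite
```

Because the two source images may be taken with different imaging parameters (per the bullets above), a practical merge would first register the two pixel grids; that step is omitted here.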
  • a method of analyzing crystal growth information comprises receiving a first image comprising a set of pixels that depict the contents of a sample, determining information for each pixel in said set of pixels, wherein said information comprises a classification describing the type of sample content depicted by said each pixel, and a color code associated with each classification, generating a second image based on said information and said set of pixels, displaying said second image, and visually analyzing said second image to determine crystal growth information of the sample.
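The color-coded display image described in the preceding bullet can be sketched as follows. The class labels and their colors are illustrative assumptions; the patent does not specify particular codes.

```python
import numpy as np

# Hypothetical classification codes and their display colors (RGB)
COLOR_MAP = {
    0: (0, 0, 0),       # background  -> black
    1: (0, 255, 0),     # crystal     -> green
    2: (255, 0, 0),     # precipitate -> red
}

def render_classified_image(class_map):
    """Turn a 2-D array of per-pixel classification labels into an RGB image,
    painting each pixel with the color code associated with its class."""
    h, w = class_map.shape
    rgb = np.zeros((h, w, 3), dtype=np.uint8)
    for label, color in COLOR_MAP.items():
        rgb[class_map == label] = color
    return rgb
```

Displaying the resulting array gives the "second image" a technician can visually scan for crystal growth.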
  • the invention comprises a system for detecting crystal growth information comprising an imaging subsystem with means for generating an image of a sample, wherein said image comprises pixels that depict the content of said sample, an image analyzer subsystem coupled to said imaging system with means for receiving said image, means for classifying the content of said sample using said pixels and means for determining whether said sample should be re-imaged based on said classifying; and a scheduler subsystem coupled to said imaging analyzer system with means for causing said imaging subsystem to re-image said sample.
  • the invention comprises a computer-readable medium containing instructions for analyzing samples in a crystal growth system, by receiving a first image of a sample, said first image generated by an imaging system using a first set of imaging parameters, analyzing information depicted in said first image to determine the contents of said sample, determining whether to generate another image of said sample based on the contents of said sample, providing information to said imaging system to generate a second image of the sample using a second set of imaging parameters, wherein said second set of imaging parameters comprises at least one imaging parameter that is different from an imaging parameter in said first set of imaging parameters, receiving said second image of said sample, and analyzing information depicted in said second image to determine the contents of said sample.
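The image/analyze/re-image workflow recited above can be sketched as a short control loop. Everything here is a hypothetical stand-in: the `MockImager` API, the stub `analyze` and `needs_reimaging` functions, and the choice of magnification as the changed parameter are assumptions for illustration only.

```python
class MockImager:
    """Stand-in for the imaging subsystem; capture() returns a fake image
    whose 'detail' grows with magnification."""
    def capture(self, sample_id, magnification=1.0, focus=0.0):
        return {"sample": sample_id, "detail": magnification}

def analyze(image):
    """Placeholder analysis: report whether enough detail is present."""
    return {"resolved": image["detail"] >= 2.0}

def needs_reimaging(contents):
    return not contents["resolved"]

def evaluate_sample(imager, sample_id):
    """Two-pass flow from the claim: image with a first parameter set,
    analyze, and if the contents warrant it, re-image with at least one
    parameter changed, then analyze the second image."""
    params = {"magnification": 1.0, "focus": 0.0}
    contents = analyze(imager.capture(sample_id, **params))
    if needs_reimaging(contents):
        params["magnification"] = 4.0   # the changed imaging parameter
        contents = analyze(imager.capture(sample_id, **params))
    return contents

print(evaluate_sample(MockImager(), "well-A1"))  # {'resolved': True}
```

In the full system the second pass would be scheduled through the scheduler subsystem rather than executed inline.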
  • Figure 1A is a high-level block diagram of an imaging system according to the invention.
  • Figure 1B is a high-level block diagram of another imaging system according to the invention.
  • Figure 2 is a perspective view of an imaging system according to the invention.
  • Figure 3 is a perspective view of the imaging system shown in Figure 2, viewed from a different angle.
  • Figure 4 is a perspective view of the imaging system shown in Figure 2, viewed from yet a different angle.
  • Figure 5 is a plan front view of the imaging system shown in Figure 2.
  • Figure 6 is a plan, right side view of the imaging system shown in Figure 2.
  • Figures 7A and 7B are perspective views from different angles of a lens system as can be used with the imaging system shown in Figure 2.
  • Figure 8 is a perspective view from below of a photo-filter carriage that can be used with the imaging system shown in Figure 2.
  • Figure 9 is a perspective view of certain components as assembled in the imaging system shown in Figure 2.
  • Figure 10 is a plan front view of certain components as assembled in the imaging system shown in Figure 2.
  • Figure 11 is a plan, right side view of the components shown in Figure 10.
  • Figure 12 is a perspective view of a light source as can be used with the imaging system shown in Figure 2.
  • Figure 13 is a perspective view of a sample mount with the light source shown in Figure 12, viewed from a different angle.
  • Figure 14A is a plan top view of the light source shown in Figure 12.
  • Figure 14B is a cross-sectional view along the plane A-A of the light source shown in Figure 14A.
  • Figure 15 is an exploded, perspective view of certain components of the sample mount and the light source shown in Figure 13.
  • Figure 16 is a functional block diagram of an illumination duration control circuit as can be used with the light source shown in Figure 12.
  • Figure 17 is a functional block diagram of an automated sample analysis system in which the imaging system according to the invention can be used.
  • Figure 18 is a block diagram of an imaging and analysis system.
  • Figure 19 is a block diagram of a computer that includes a Crystal Resolve analysis module, according to one aspect of the invention.
  • Figure 20A is a block diagram of an analysis system process, according to one embodiment of the invention.
  • Figure 20B is a block diagram of an analysis system process, according to one embodiment of the invention.
  • Figure 21 is a flow diagram of an imaging analysis process, according to one embodiment of the invention.
  • Figure 22 is a flow diagram of an imaging analysis and control process, according to one embodiment of the invention.
  • Figure 23 is a flow diagram of an analysis process, according to one embodiment of the invention.
  • the imaging and analysis system and methods disclosed here are related to embodiments of an automated sample analysis system having an imaging system that is described in the related U.S. provisional patent application No. 60/444,519, entitled “AUTOMATED SAMPLE ANALYSIS SYSTEM AND METHOD.”
  • An imaging system that can provide images of samples for analysis, in response to control information, is described hereinbelow, followed by a description of a system and processes for analyzing the images.
  • sample refers to any type of suitable sample, for example, drops, droplets, the contents of a well, the contents of a capillary, a sample in a gel, or any other manner of containing a sample or material.
  • Figure 1A is a high-level block diagram of an imaging system 100.
  • the imaging system 100 has an assembly 105 that is controlled by controllers and logic 110.
  • the assembly 105 includes a stage 115 that holds and transports target samples to be imaged by an image capture device 120.
  • the imaging system 100 employs an optics assembly 125 to enhance the view of the target samples before the image capture device 120 obtains the images of the samples.
  • An illuminator 130 is configured as part of the assembly 105 to direct light at the samples held in the stage 115.
  • the assembly 105 also includes a translator 135 that provides the structural support members and actuators to move any one combination of the stage 115, image capture device 120, optics 125, or illuminator 130.
  • the translator 135 may be configured to move the combination of components in one, two, or three dimensions.
  • the stage 115 remains stationary while the translator 135 moves the image capture device 120 and optics 125 to a desired well position in a sample plate held by the stage 115.
  • the translator 135 moves the stage 115 in a first axis and the image capture device 120 and optics 125 in a second axis which is substantially perpendicular to the first axis.
  • the controllers and logic 110 of the imaging system 100 provide instructions to and coordinate the activities of the components of the assembly 105.
  • the controllers may include a microprocessor, controller, microcontroller, or any other computing device.
  • the logic includes the instructions to cause the controller to perform the tasks or processing described here.
  • Figure 1B is a high-level block diagram of an imaging system 150.
  • the imaging system 150 includes an assembly 155 in communication with controllers and logic 160.
  • the assembly 155 may also be in communication with a data storage device 190, which itself may be configured for communication with the controllers and logic 160.
  • the controllers and logic 160 control and coordinate the activities of the components of the assembly 155.
  • the assembly 155 includes a sample plate mount 165 suitably configured to receive micro-titer plates of various configurations and sizes.
  • the sample plate mount 165 can be configured to receive any sample matrix that carries samples, regardless of whether the samples are stored in individual sample wells, rest on the surface of the sample matrix (e.g., as droplets), or are embedded in the sample matrix.
  • a source of flash lighting 180 is arranged to direct light bursts to the samples stored in the micro-titer plate carried by the sample plate mount 165. An inventive system and method of providing the flash lighting 180 will be discussed with reference to Figure 16.
  • the assembly 155 includes a compound lens 175 that cooperates with a digital camera 170 to acquire images of the samples in the sample plate.
  • the compound lens 175 may consist, for example, of an objective lens, a zoom lens, and additional optics chosen to provide the digital camera 170 with the desired image from the light from the samples.
  • the compound lens 175 may be motorized (i.e., provided with one or more actuators) so that the controllers and logic 160 can automatically focus the scene, zoom on the scene, and set the aperture.
  • the assembly 155 includes an x-y translator that moves either the sample plate mount 165 or the compound lens 175, or both.
  • the x-y translator moves both the digital camera 170 and the compound lens 175.
  • the x-y translator 185 is configured to move the sample plate mount 165 in two axes, e.g., x and y coordinates.
  • the x-y translator 185 moves the compound lens 175 in two axes, while the sample plate mount 165 remains stationary.
  • the x-y translator consists of multiple and separate actuators that independently move the sample mount 165 or the compound lens 175.
  • the assembly 155, the controllers and logic 160, and the data storage 190 are depicted as separate components for schematic purposes only. That is, in some embodiments of the imaging system 150 it is advantageous to, for example, integrate the data storage device 190 into the assembly 155 and to include the controllers and logic 160 as part of one or more of the components shown as being part of the assembly 155. Similarly, the sample mount 165, digital camera 170, compound lens 175, flash lighting 180, and x-y translator 185 need not all be configured as part of a single assembly 155 as shown.
  • the imaging system 200 includes a sample plate mount 210 that receives a sample plate 212.
  • An x-translator having an actuator 218 (see Fig. 4) is coupled to the sample plate mount 210 to move the sample plate mount 210 into position above a light source 216 and below a lens assembly 230.
  • a digital camera 214 is coupled to the lens assembly 230 to capture images of the wells in the sample plate 212.
  • a y-translator having an actuator 220 is coupled to the lens assembly 230 to move the lens assembly 230 into position over a desired well of the sample plate 212.
  • the digital camera 214, lens assembly 230, sample plate mount 210, light source 216, x-translator 218, and y-translator 220 are mounted on a platform 240 (see Fig. 2).
  • the platform 240 generally consists of several structural members, brackets, or walls, e.g., base 242, side wall 244, front wall 250, bracket 252, bracket 246, post 248, and support member 254.
  • the light source 216 can be fastened to the base 242.
  • Rails 256 and 258, which support the lens assembly 230 are fastened to the wall 250 of the platform 240 and to the support member 254.
  • the sample plate mount 210 is supported by a rail 262 and an outport guide 253 of the support member 254.
  • the rail 262 is supported through attachment to the side wall 244 and the post 248.
  • the platform 240 may be constructed of any of several suitable materials, including but not limited to, aluminum, steel, or plastics. Because in some applications it is critical to keep vibration of the platform 240 to a minimum, materials that provide rigidity to the platform 240 are preferred in such applications.
  • the rails 256, 258, and 262 are preferably manufactured with very smooth surfaces to carry the lens assembly 230 or the sample plate mount 210 in a smooth fashion, thereby avoiding vibrations.
  • the lens assembly 230 may be supported by coupling the linear plain bearings 264 and 266 to the rails 256 and 258.
  • a similar coupling using a "bushing" 267 may be employed to fasten the sample plate mount 210 to the rail 262.
  • Bearings 264, 266, and 267 are chosen to provide smooth bearing surfaces for smooth translation of the load, e.g., the lens assembly 230 or the sample plate mount 210.
  • the sample plate mount 210 may be constructed from any rigid material, e.g., steel, aluminum, or plastics. Preferably the sample plate mount 210 is configured to accommodate, either directly or through the use of adapters, various standard sizes of micro-titer plates. Micro-titer plates that may be used with the sample plate mount 210 include, but are not limited to, crystallography plates manufactured by Linbro, Douglas, Greiner, and Corning. As will be described further below, the sample plate mount 210 is coupled to an actuator 218 for moving the sample plate mount 210 in one axis.

Translators
  • the imaging system 200 includes two independent translators. Typically, the sample plate mount 210 and the lens assembly 230 move on a plane that is substantially parallel to a plane defined by the sample plate 212 carried by the sample plate mount 210. In one embodiment, the controllers and logic 110 or 160 can control x-, y-translators to position the sample plate mount 210 and the lens assembly 230 at the coordinates of a specific well of the sample plate 212.
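Positioning the mount and lens at "the coordinates of a specific well" implies a mapping from well name to stage coordinates. A minimal sketch follows, assuming a standard 96-well layout with 9 mm well pitch; the origin offsets and function name are illustrative assumptions, not values from the patent.

```python
# Assumed geometry for a standard 96-well plate (8 rows x 12 columns)
WELL_PITCH_MM = 9.0
ORIGIN_X_MM, ORIGIN_Y_MM = 14.38, 11.24   # assumed offset of well A1 from the plate datum

def well_to_xy(well):
    """Translate a well name like 'B7' into stage x, y coordinates (mm)
    that the x-, y-translators can be commanded to."""
    row = ord(well[0].upper()) - ord('A')   # A..H -> 0..7
    col = int(well[1:]) - 1                 # 1..12 -> 0..11
    return (ORIGIN_X_MM + col * WELL_PITCH_MM,
            ORIGIN_Y_MM + row * WELL_PITCH_MM)

print(well_to_xy('A1'))  # (14.38, 11.24)
```

Plates of other formats (e.g., 24- or 384-well) would use the same scheme with a different pitch and origin.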
  • An x-axis translator for moving the sample plate mount 210 consists of an actuator 218 (see Fig. 4) that rotates a threaded rod 219 (or “lead screw”) about its axis in clockwise or counter-clockwise directions.
  • the actuator 218 is coupled to the rod 219 via a belt (not shown) and pulleys 221 and 221 '.
  • the sample plate mount 210 is fastened to a "bushing" 267 (see Fig. 10) that rides on the rail 262.
  • the sample plate mount 210 is also supported by the outport guide 253 (see Figs. 6 and 11) of the support member 254.
  • the “bushing” 267 is additionally coupled in a known manner to the rod 219.
  • when the actuator 218 turns in one direction, its power is transmitted via the belt and pulleys 221 and 221' to the rod 219, which then moves the "bushing" 267 and, thereby, moves the sample plate mount 210 in a linear direction.
  • a y-axis translator for moving the lens assembly 230 consists of an actuator 220 (see Fig. 3) that rotates a threaded rod 260 about its axis in clockwise or counter-clockwise directions.
  • the actuator 220 is coupled to the rod 260 through a slotted disc coupling (not shown).
  • the lens assembly 230 is coupled to bearings 264 and 266 that respectively ride on rails 256 and 258.
  • the bearings 264 and 266 are coupled to the rod 260 through plate 255 and the bracket 257 (see Fig. 6) in a known manner.
  • when the actuator 220 turns in one direction, its power is transmitted via the slotted disc coupling to the rod 260, which then moves the bearings 264 and 266 and, thereby, moves the lens assembly in a linear direction.
  • the actuators 218 and 220 may be direct current gear motors or 3-phase servo motors, for example.
  • the type of motors employed as the actuators 218 or 220 will depend on, among other things, the weight of the sample plate mount 210 plus sample plate 212 or the lens assembly 230 and the digital camera 214. Another factor in determining the type of motor is the desired speed.
  • actuators 218 and 220 having a positioning precision of 10 microns are used. Suitable motors may be obtained from PITMANN® of Harleysville, Pennsylvania.
  • each translator mechanism independently translates along an axis of motion each of the sample plate mount 210 and the lens assembly 230.
  • the imaging system 200 may be configured so that an x-y translator (or set of x-, y-translators) moves the lens assembly in the x-y coordinate area, while the sample plate mount 210 remains stationary over the light source 216.
  • the x-, y-translators employ optical sensors 285 and 287 (see Fig. 5) to sense the start or end positions ("home positions") of the lens assembly 230 or the sample plate mount 210.
  • the imaging system 200 may also include a z-axis translator (not shown) to lift or lower the sample plate mount 210, lens assembly 230, or light source 216.
  • the z-axis translator may consist of, for example, an actuator, a lead screw, one or more rails, and appropriate bearings and fasteners.
  • the actuators 218 and 220 may be governed by a controller (not shown).
  • Suitable controllers may be obtained from J R Kerr Automation Engineering of Flagstaff, Arizona.
  • the controller may be configured to interpret high level commands from a computing device.
  • the controller causes the actuator 220, for example, to move and keeps count of the travel distance and final location.
  • the controller can be programmed to move the actuator 220 at varying speed, torque, and acceleration.
  • the image capture device can be a film camera, a digital camera, a CMOS camera, a charge coupled device (CCD), and the like, or some other apparatus for capturing an image of an object.
  • the embodiments of the imaging system 200 described here employ a digital camera 214.
  • a suitable digital camera 214 is, for example, a CMOS digital camera.
  • the CMOS camera 214 is preferred because it provides random access to the image data and is relatively low cost.
  • in known imaging systems, a CMOS camera is typically not used because the level of light in those systems is insufficient for this type of camera.
  • the imaging system 200 is configured to provide the level of light necessary to allow use of a CMOS camera.
  • the digital camera 214 can be a CMOS camera having a pixel resolution of
  • the digital camera 214 may be fully digital and not require a frame grabber.
  • the digital camera 214 may also have a centered pixel area, e.g., a 1024 x 1024 or 800 x 600 pixel subset of the array, which enhances image quality by avoiding the edges of the array, where optical distortions increase.
  • the digital camera 214 is connected separately to a host computer (not shown) via a Firewire data interface. This allows for rapid transfer of large amounts of image data, e.g., five images per second.
  • One embodiment of the lens assembly 230 includes an objective lens 231, a zoom lens 233, and an adapter 235. These optical components are chosen to provide suitable field of view, magnification, and image quality.
  • the objective lens 231, zoom lens 233, and adapter 235 may be purchased from, for example, Navitar Inc. of Rochester, New York.
  • the zoom lens 233 may be the "12X UltraZoom" zoom lens manufactured by Navitar.
  • the zoom lens 233 may provide a 12:1 zoom factor, a focus range of about 12-mm, and an aperture of about 0.14.
  • the zoom lens 233 preferably includes adapters for mounting the objective lens 231.
  • the zoom lens 233 may have actuators 233A, 233B, and 233C for providing, respectively, automatic aperture adjustment, autozoom, and autofocus functionality.
  • actuators 233B and 233C have gear reductions of 262:1.
  • the gear reduction ratio is chosen to suit the particular application. For example, a 5752:1 gear ratio for the focus actuator 233C may be too slow for some applications of the imaging system 200.
  • the actuators 233A, 233B, and 233C may be obtained from Navitar or from MicroMo Electronics, Inc. of Clearwater, Florida.
  • the objective lens 231 may be, for example, a 5X Mitutoyo Infinity-Corrected Long Working Distance Microscope Objective (model M Plan Apo 5).
  • the objective lens 231 is coupled to the zoom lens 233. Since the light source 216 delivers sufficient light to the sample plate 212, the lens assembly 230 is configured to allow for setting a small aperture in order to increase the depth of field.
  • the objective lens 231 preferably provides a working distance that allows adequate room beneath the lens assembly 230 to manipulate a sample plate 212 and provide a photo-filter carriage 237 in the image path. In one embodiment, the working distance of the objective lens 231 is about 34-mm.
  • the adapter 235 serves to allow use of the digital camera 214.
  • the adapter 235 may be, for example, a 1X Adapter model number 1-6015 sold by Navitar.
  • different combinations of objective lenses 231 and adapters 235 may be used, e.g., a 2X Adapter and 2X Objective combination.
  • the combination of 1X Adapter and 5X Objective provides a suitable image for most applications of the imaging system 200.
  • the optical components of the lens assembly 230 can be provided with actuators for remote and automatic control.
  • controllers and control logic can control the actuators 233A, 233B, 233C, and 233D.
  • the actuators 233A, 233B, 233C, and 233D (e.g., dc motors) are preferably provided with encoders to provide position information to the controllers.
  • the actuators on the lens assembly 230 are 17-mm direct current motors with 100:1 gear reducers. These motors may be obtained from PITMANN® of Harleysville, Pennsylvania.
  • the lens assembly 230 may also include a photo-filter carriage 237 that is configured to hold optical filters (not shown).
  • the photo-filter carriage 237 can hold polarization plates or color light filtering plates.
  • Figure 8 illustrates one embodiment of a photo-filter carriage 237 that may be used with the imaging system 200.
  • the photo-filter carriage 237 includes a filter wheel 237 A for receiving one or more photo-filters (not shown) in openings 237B.
  • the photo-filters may be held in place in the filter wheel 237A in a variety of ways. For example, in the embodiment illustrated in Figure 8, caps 237C in cooperation with suitable fasteners hold the photo-filters in place.
  • the filter wheel 237A may be coupled to an actuator 233D for remote and automatic control of the filter wheel 237A.
  • the actuator 233D and the filter wheel 237A may be fastened, in a conventional manner, to a clamp 237D that is coupled to, for example, the objective lens 231 or the zoom lens 233 (see Figs. 1 and 9).
  • a polarization filter is coupled to a filter wheel so that the polarization filter covers about 90 degrees of the wheel.
  • the polarization filter can be rotated so that the applied polarization varies between zero and ninety degrees.
  • the use of the polarization filter with a polarized light source can provide analysis of the effect of samples on polarized light. For example, when a polarized light source and the polarization filter are cross-polarized then minimal light should get to the objective lens 231, unless the sample re-orients the polarized light, such as can happen when the light passes through crystals.
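The cross-polarization behavior described above follows Malus's law: the fraction of polarized light transmitted through a second polarizer at angle θ to the first is cos²θ. A minimal sketch (the function name is an assumption; the physics is standard):

```python
import math

def transmitted_fraction(angle_deg):
    """Malus's law: I/I0 = cos^2(theta) for polarized light passing
    a second polarizer oriented at angle_deg to the first."""
    return math.cos(math.radians(angle_deg)) ** 2

# Aligned polarizers pass essentially all the light; crossed (90 degree)
# polarizers pass essentially none, unless the sample (e.g., a crystal)
# rotates the polarization between them.
```

This is why minimal light reaches the objective lens 231 under cross-polarization unless a birefringent object such as a crystal re-orients the light.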
  • the digital camera 214 in combination with the lens assembly 230 provides a broad depth of field to allow imaging of objects such as protein crystals at varying depths within a sample droplet stored in a sample well of a sample plate 212.
  • the lens assembly 230 has a 12:1 zoom lens and, in cooperation with the digital camera 214, can provide a 1 micron optical resolution.
  • the lens assembly 230 and the digital camera 214 may be integrated as a single assembly.
  • Figure 12 shows a perspective view of the light source 216. Since the crystallization of substances is often highly sensitive to temperature changes, the light source 216 is preferably configured to minimize the amount of heat transferred to the sample plate 212, e.g., by isolating and removing heat generated by the electronics 1408 and illuminators 1402 (see Fig. 14B).

Housing
  • the light source 216 includes a housing 1202 adapted to store one or more illuminators 1402 (see Figs. 14B and 15), cooling elements 1404, heat reflecting glass 1406, light diffuser plate 1206, and corresponding electronics 1405 and 1408.
  • the housing 1202 consists of a plurality of walls that serve as structural support for the internal components and that substantially isolate the internal components from the external environment.
  • the housing 1202 can be constructed of a variety of materials including, but not limited to, stainless steel, aluminum, and hard plastics. A material with a low coefficient of heat transfer is preferred so as to substantially keep heat generated within the housing 1202 from reaching the outside through the walls of the housing 1202.
  • cooling elements 1404 are provided.
  • one or more of the internal surfaces of the walls of the housing 1202 may be coated with a suitable material that absorbs or reflects various types of radiation and prevents them from reaching the outside of the housing 1202.
  • the top wall 1204A of the housing 1202 has an opening to receive and support a light diffuser plate 1206.
  • the plate 1206 serves to diffuse light from the illuminators 1402 onto the sample plate 212.
  • the plate 1206 may be, for example, a sheet of translucent plastic.
  • a heat reflecting glass ("hot mirror") 1406 (see Fig. 14B) is provided inside the housing 1202, adjacent to and below the plate 1206.
  • the wall 1204B of the housing 1202 may be provided with a plurality of orifices 1208 that allow a cooling element 1404, such as a fan, to draw air into the housing 1202 for cooling the internal components.
  • a wall 1204C (see Fig. 14B) of the housing 1202 can be fitted with an opening 1410 for receiving a duct that guides forced air out of the housing 1202.
  • a wall 1204D (see Fig. 13) of the housing 1202 can be fitted with a power plug 1208 and a communications port 1302.
  • the housing 1202 is preferably adapted to isolate an operator of the imaging system 200 from high voltages that may be used to fire the illuminators 1402.
  • the housing 1202 may be configured in a variety of ways not limited to that detailed above.
  • the ventilation openings 1208 on wall 1204B may be replaced by one or more fans built into the wall 1204B or the wall 1204E.
  • the ventilation openings 1208 may be located on the bottom wall (not shown) of the housing 1202, for example.
  • the light source 216 includes one or more illuminators 1402 that generate light rays.
  • the illuminators 1402 may be various types, for example, incandescent bulbs, light emitting diodes, or fluorescent tubes of various types including, but not limited to, mercury- or neon-based fluorescent tubes.
  • the illuminators 1402 are two xenon tubes. Xenon tubes are well known in the relevant technology and are readily available.
  • the xenon tubes 1402 can include borosilicate glass that absorbs ultraviolet radiation. Xenon tubes are preferred because they produce sufficient light to allow use of a CMOS camera 214 in the imaging system 200. Xenon tubes are also preferred since they provide a broad spectrum of light rays, which enables use of color to enhance detection of crystal growth in the wells of the sample plate 212.
  • the actual dimensions of the illuminators 1402 are chosen to suit the specific application.
  • the xenon tubes 1402 are long enough to cover one dimension of the sample plate 212 so that it is not necessary to move the light source 216 when the lens assembly 230 or sample plate mount 210 are repositioned.
  • the illuminators 1402 may be supported on a board 1405, which may also support electronics for control of the illuminators 1402.
  • two illuminators 1402 are positioned to provide different locations of the illumination source, e.g., both on-axis and off-axis lighting of the wells in the sample plate 212.
  • the imaging axis of the lens assembly means the principal axis of the lens assembly.
  • first and second xenon tubes 1402 can be positioned, respectively, a first and second distance from the imaging axis of the lens assembly 230.
  • the first and second distances are substantially equal in length.
  • the first xenon tube is positioned opposite the imaging axis from the second xenon tube.
  • the xenon tubes 1402 are mounted about an inch on either side of the area directly under the lens assembly 230.
  • This configuration allows the use of an indirect lighting effect when only one xenon tube is fired. That is, when two xenon tubes are positioned off the imaging axis, the controllers and logic 110 or 160 can control the tubes to provide on-axis or off-axis illumination of the sample plate 212.
  • One xenon tube can be fired to provide off-axis illumination of the sample plate 212.
  • off-axis illumination is preferred because it produces shadows on small objects in a sample droplet stored in a well of the sample plate 212.
  • the controllers and logic 160 control the assembly 155 to capture two images of a droplet in a well plate of the sample plate 212.
  • the imaging system 150 captures one image with the light source 216 lighting the sample with a first xenon tube.
  • the imaging system 150 captures a second image with the light source 216 lighting the sample with the second xenon tube.
  • the controllers and logic 160 can then combine the data from both images and perform an analysis based on the combined data. This results in enhanced characterization of the sample since the combination of the images typically provides more information about crystallization of the sample than a single image acquired with standard back lighting of the scene.
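One plausible way to combine the two opposite-side images is sketched below. The per-pixel operations are illustrative assumptions, since the text does not specify how the data from the two exposures are combined:

```python
def combine_images(img_a, img_b):
    """Combine two grayscale images (lists of rows of brightness values)
    of the same droplet, each lit by a different off-axis xenon tube.
    The per-pixel maximum evens out the one-sided lighting, while the
    absolute difference emphasizes the shadows cast by small objects
    under each single-tube illumination."""
    evened = [[max(a, b) for a, b in zip(row_a, row_b)]
              for row_a, row_b in zip(img_a, img_b)]
    shadows = [[abs(a - b) for a, b in zip(row_a, row_b)]
               for row_a, row_b in zip(img_a, img_b)]
    return evened, shadows
```

The shadow image carries relief information that a single back-lit exposure lacks, which is consistent with the enhanced characterization described above.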
  • a source filter 270 ( Figure 2) may be inserted in a filter slot 272 so that the filter 270 is interposed between the light source 216 and the sample plate 212.
  • the various filters 270 may be inserted and removed from the filter slot 272 by a plate handler.
  • the filter 270 may be automatically removed, or exchanged with another filter, by the imaging system 200.
  • the source filter 270 may be any type of filter, such as a wavelength specific filter (e.g. red, blue, yellow, etc.) or a polarization filter.
  • the light source 216 includes one or more illuminators 1402 (e.g., fluorescent tubes) adapted to provide flash lighting. That is, the illuminators 1402 are controlled to illuminate the sample plate 212 only momentarily as the digital camera 214 captures an image of a well in the sample plate 212.
  • This arrangement provides benefits over known devices in which illuminators remain in the on-position throughout the entire time that the sample plate 212 is handled by an imaging system.
  • the imaging system 200 since the illuminators 1402 are turned on for only a fraction of a second per image, very little heat radiation is transferred to the wells of the sample plate 212.
  • one benefit of this configuration is that the imaging system 200 can provide high illumination levels for the camera 214 while minimizing energy or radiation transfer to the samples in the sample plate 210.
  • An exemplary control circuit 1600 that provides controlled flash lighting is described below with reference to Figure 16.
  • FIG 16 is a functional block diagram of an illumination duration ("flash") control circuit 1600 for an illuminator 1402.
  • the illuminator 1402 can be, for example, a xenon tube having a length greater than the maximum width of the sample plate 212 to be used in the imaging system 100, 150, or 200. By having such a dimension, the illuminator 1402 can be located underneath and along one axis of the sample plate 212 to illuminate all the wells in one row or column of the sample plate 212 without repositioning the illuminator 1402.
  • a first end of the illuminator 1402 is connected to a first capacitor 1602 and a first resistor 1604.
  • the opposite end of the first resistor 1604 is connected to a power supply 1606.
  • the power supply 1606 may be controlled by a dedicated RS232 line, for example.
  • the opposite or second end of the first capacitor 1602 that is not connected to the illuminator 1402 is connected to ground or a voltage common.
  • the second end of the illuminator 1402 is connected to the anode of a first silicon controlled rectifier ("SCR") 1607 and a first terminal of a second capacitor 1608, respectively.
  • An SCR is a solid state switching device that can provide fast, variable proportional control of electric power.
  • a resistor 1620 is connected between the first terminal of the second capacitor and the cathode of a second SCR 1610.
  • the second terminal of the second capacitor 1608 is connected to an anode of the second SCR 1610.
  • the cathode of the first SCR 1607 is connected to the ground or voltage common potential.
  • the cathode of the second SCR 1610 is connected to the cathode of the first SCR 1607 and is similarly connected to ground or the voltage common potential.
  • the anode of the second SCR 1610 is also connected to a second resistor 1614 that connects the anode of the second SCR 1610 to the power supply 1606.
  • a trigger 1612 of the illuminator 1402 is connected to the gate of the first SCR 1607.
  • This common connection controls the trigger 1612 of the illuminator 1402 and the start of illumination.
  • the gate of the second SCR 1610 controls a stop or end of illumination.
  • the duration of illumination provided by the illuminator 1402 can be controlled as follows. Initially, the first and second SCRs 1607 and 1610, respectively, are not conducting. The first capacitor 1602 is charged up to the level of the voltage of the power supply 1606 using the first resistor 1604. The power supply 1606 can, for example, charge the first capacitor to 300 volts or more.
  • the size of the first capacitor 1602 relates to the amount of energy that can be transferred to the illuminator 1402.
  • the illuminator 1402 provides an illumination based in part on the amount of energy provided by the first capacitor 1602.
  • the first capacitor 1602 can be one capacitor or a bank of capacitors.
  • the first capacitor 1602 can be, for example, a 600 µF capacitor.
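Given the example values in the text, the energy available per flash follows the standard capacitor-energy formula E = ½CV²; a 600 µF capacitor charged to 300 V stores roughly 27 J. A quick check:

```python
def flash_energy_joules(capacitance_farads, voltage_volts):
    """Energy stored in the flash capacitor: E = 0.5 * C * V^2."""
    return 0.5 * capacitance_farads * voltage_volts ** 2

# 600 µF charged to 300 V stores roughly 27 joules per flash;
# at the 180 V lower end of the supply range, under 10 joules.
energy_high = flash_energy_joules(600e-6, 300.0)
energy_low = flash_energy_joules(600e-6, 180.0)
```

This is why varying the supply voltage (discussed further below in the text) directly varies the illumination available for a flash of the same duration.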
  • the sizes of the resistors 1620 and 1614 are determined in part by the desired voltage rise time on the second capacitor 1608. Smaller resistors 1620 and 1614 allow the second capacitor 1608 to charge quickly. However, the second SCR 1610 can inadvertently trigger if the voltage impulse at its anode is too great. Thus, the values of the resistors 1620 and 1614 are typically chosen to allow the second capacitor 1608 to recharge before the next image flash trigger, but not to recharge so quickly as to inadvertently trigger conduction in the second SCR 1610. The resistor 1620 provides an electrical path from the anode of the first SCR 1607 to ground, allowing the second capacitor 1608 to charge.
  • the illuminator 1402 is ready to trigger once the first capacitor 1602 is charged.
  • the second capacitor 1608 is charged by the power supply 1606 through the second resistor 1614 concurrent with the charging of the first capacitor 1602.
  • the second capacitor 1608 is chosen to be large enough to generate a current potential that shuts off the first SCR 1607 and, thus, to terminate illumination by the illuminator 1402.
  • the second capacitor 1608 can be a single capacitor or can be a bank of capacitors.
  • the second capacitor 1608 can be, for example, a 20 µF capacitor.
  • the duration of illumination can be controlled.
  • the illuminator 1402 initially illuminates when the trigger signal is provided to the control of the illuminator 1402 and the gate of the first SCR 1607.
  • the illuminator 1402 can include a triggering circuit that triggers the illuminator 1402 in response to a logic signal. If the illuminator 1402 does not include this circuit, an external triggering circuit can be included.
  • the first SCR 1607 conducts in response to the trigger signal.
  • the first SCR 1607 then continues to conduct even in the absence of a gate signal.
  • the first SCR 1607 can be shut off by interrupting the current through the SCR or by reducing the voltage drop across the first SCR 1607 to below the forward voltage of the device.
  • the second SCR 1610 is controlled by a stop signal generator 1616 to connect the second capacitor 1608 in parallel with the first SCR 1607. However, the second capacitor 1608 is charged in opposite polarity to the voltage drop across the first SCR 1607. Thus, when the second SCR 1610 initially conducts, the voltage from the second capacitor 1608 is placed in opposite polarity across the first SCR 1607 thereby shutting off the first SCR 1607.
  • the second end of the illuminator 1402 and the first terminal of the second capacitor 1608 are pulled to ground via the first SCR 1607.
  • the illuminator 1402 then illuminates in response to the current flowing through the illuminator 1402.
  • the second SCR 1610 controls turn-off of the illuminator 1402.
  • the second SCR 1610 begins to conduct when a stop signal is applied to the gate of the second SCR 1610. This pulls the second terminal of the second capacitor 1608 to ground. Because a capacitor resists instantaneous voltage changes, the voltage across the second capacitor 1608 drives the anode of the first SCR 1607 below ground, shutting off the first SCR 1607.
  • the illuminator 1402 shuts off when the first SCR 1607 turns off because there is no longer a current path through the illuminator 1402.
  • a microprocessor, controller, or microcontroller can be programmed to control the trigger 1612 and stop signal generator 1616.
  • the processor controls the trigger signal to initiate illumination with the illuminator 1402.
  • the processor then controls the stop signal to control termination of the illuminator 1402.
  • the processor can thus control the trigger and stop signals to control the duration of the illumination.
  • the processor can control the duration of the illumination (a "flash") in predetermined intervals or can control the duration of the illumination over a range of time. For example, the processor can control the duration of the flash in microsecond steps across an interval of approximately 20 µs to 600 µs.
  • the processor can control the lower range of the duration of the flash to be 0, 20, 40, 50, 75, 100, 150, 200, 250, 300, 350, 400, 450, 500, or 550 µs.
  • the processor can control the upper range of the duration of the flash to be 40, 50, 75, 100, 150, 200, 250, 300, 350, 400, 450, 500, 550, or 600 µs.
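The trigger/stop sequencing the processor performs might look like the following sketch. The callable parameters and the clamping policy are assumptions; only the 20-600 µs bounds and microsecond stepping come from the text:

```python
def fire_flash(trigger, stop, sleep_us, duration_us,
               lower_us=20, upper_us=600):
    """Fire the illuminator for a controlled duration: clamp the
    requested duration to the configured range, issue the trigger
    (gating the first SCR to start illumination), wait, then issue
    the stop signal (gating the second SCR to end illumination)."""
    duration = max(lower_us, min(upper_us, int(duration_us)))
    trigger()           # start of illumination
    sleep_us(duration)  # microsecond-step timing
    stop()              # end of illumination
    return duration
```

In the system described, the camera 214 rather than a software timer would issue the trigger, so that the flash stays synchronized with the electronic shutter.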
  • the digital camera 214 issues the signal to turn on the illuminator 1402 so that the "flash" will be in synchronization with the electronic shutter of the digital camera 214.
  • the power supply 1606 can be a controllable high voltage power supply.
  • the microprocessor, controller, or microcontroller can also control the output voltage of the power supply 1606 to further control the illumination provided by the illuminator 1402.
  • the microprocessor can control the output voltage of the power supply 1606 to vary the illumination provided by the illuminator 1402 for the same illumination duration.
  • the microprocessor can control the power supply 1606 to a lower output voltage to minimize the illumination.
  • the microprocessor can control the power supply 1606 to a higher output voltage, thereby increasing the illumination.
  • the microprocessor can control the output voltage of the power supply 1606 over a range of, for example, 180-300 volts.
  • the illuminator 1402 may not consistently illuminate for voltages below 180 volts when the illuminator 1402 is a xenon flash tube.
  • the microprocessor can control the output voltage of the power supply 1606 using a digital control word.
  • the microcontroller can control the output voltage of the power supply 1606 in steps determined in part by the number of bits in the control word and the tunable range of the power supply 1606.
  • the microcontroller can, for example provide a 10-bit control word, an 8-bit control word, a 6-bit control word, a 4-bit control word, or a 2-bit control word.
  • the power supply 1606 output voltage can be continuously variable over a predetermined range.
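A linear mapping from an n-bit control word onto the 180-300 V tunable range is one plausible reading of the control scheme above; the linearity and the helper name below are assumptions:

```python
def word_to_voltage(word, bits, v_min=180.0, v_max=300.0):
    """Map an n-bit digital control word onto the power supply's
    tunable output range; the step size shrinks as bits increase."""
    if not 0 <= word < 2 ** bits:
        raise ValueError("control word out of range for %d bits" % bits)
    step = (v_max - v_min) / (2 ** bits - 1)
    return v_min + word * step
```

With an 8-bit word the supply is adjustable in steps of about 0.47 V; a 2-bit word gives only four levels (180, 220, 260, 300 V).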
  • the microcontroller can control a level of illumination by controlling the illumination duration, the power supply 1606 output voltage, or a combination of the two.
  • the microprocessor's ability to control the combination of the two permits a wider range of brightness outputs than if only one parameter were controllable.
  • the microprocessor's ability to control both illumination duration and power supply 1606 output voltage is advantageous for different lens zoom conditions. When magnification is low, such as when the lens is zoomed out, a relatively small amount of light is required. When magnification is high, a relatively large amount of light is required to capture an image. Filters and varying apertures may also be used to adjust the amount of light from the light source.

Operation
  • the imaging system 200 includes software modules that control and direct the lens assembly 230 to perform the following functions.
  • the imaging system 200 is configured to automatically control the brightness of the image. For example, after the camera 214 captures an image of a well of the sample plate 212, the software determines whether the brightness is within predetermined thresholds. If not, the controllers and logic of the imaging system 200 iteratively adjust the illumination intensity of the illuminators 1402 until the brightness of the captured images falls within the thresholds. In some embodiments, the brightness of the image may be evaluated based on a predetermined region (or set of pixels) of the captured image.
  • the brightness of the illuminators 1402 may be adjusted when capturing a plurality of images of the same sample droplet.
  • the controllers and logic 160 control the assembly 155 to capture two images of a droplet in a well plate of the sample plate 212.
  • the imaging system 150 captures one image with the light source 216 lighting the sample with a first brightness level.
  • the imaging system 150 captures a second image with the light source 216 lighting the sample with a second brightness level.
  • the controllers and logic 160 can then combine the data from both images and perform an analysis based on the combined data, which may result in enhanced characterization of the sample. In some embodiments, the brightness used for the second image may be logically controlled based on analyzing the brightness of the first image, determining whether a lighter or darker second image may result in enhanced characterization of the sample, and adjusting the light source 216 to light the sample accordingly.
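The iterative brightness adjustment can be sketched as a simple feedback loop. The proportional update rule, thresholds, and callables are illustrative assumptions; the text specifies only the capture / evaluate / adjust / repeat cycle:

```python
def auto_brightness(capture, lo=80.0, hi=170.0,
                    intensity=0.5, gain=0.002, max_iters=10):
    """Capture an image region at the current illuminator intensity,
    compare its mean brightness against the [lo, hi] thresholds, and
    nudge the intensity toward the target band until the brightness
    falls inside or the iteration budget is spent."""
    mean = None
    for _ in range(max_iters):
        region = capture(intensity)  # brightness values of the evaluated region
        mean = sum(region) / len(region)
        if lo <= mean <= hi:
            break
        # proportional step toward the midpoint of the target band
        intensity += gain * ((lo + hi) / 2.0 - mean)
        intensity = min(max(intensity, 0.0), 1.0)
    return intensity, mean
```

In practice the intensity adjustment could act on either the flash duration or the supply voltage, as described in the flash-control discussion above.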
  • the imaging system 200 can also be configured with software to automatically focus the image.
  • An exemplary autofocus routine is as follows. Once the lens assembly 230 is positioned over a sample of the sample plate 210, the objective lens 231 is moved along its imaging axis to a predetermined starting position. The camera 214 then acquires an image of the sample and/or well at that focus position. In one embodiment, the software obtains a "focus score." This may be done, for example, by examining the brightness values of a set of pixels (e.g., a 500x3 pixel area) in the captured image, applying a low pass filter, and computing the sum of the squares of the differences in brightness of adjacent pixels for the set of pixels.
  • the position and focus score data points are stored in an array.
  • the objective lens 231 is moved to the next predetermined incremental position on its imaging axis, and the process of acquiring an image, computing the focus score, and storing the position and focus score values is repeated. This process continues until the objective lens 231 has been moved to all the predetermined or desired positions, e.g., until it reaches a predetermined end position by incrementally moving in a predetermined step size from the starting position.
  • the step size depends at least in part upon a predetermined maximum number of images to be acquired during the autofocus routine.
  • the software searches the lens position/focus score array to identify the lens position with the best focus score.
  • the software then proceeds to compute the lens positions that are midway from the best focus score position to positions adjacent to it in the array. That is, the software examines the array of positions already imaged, finds the nearest position greater than the lens position associated with the best focus score, and calculates a "midpoint" position between them. A similar process is performed with regard to the nearest lens position that is less than the best focus score position.
  • the software acquires images at the midpoint positions and obtains corresponding focus scores.
  • the software once again evaluates the array to identify the image with the best focus score, using a step size that is, say, one-half of the initial step size. These tasks are repeated until, for example, a maximum number of images acquired during autofocus, or a minimum step size, has been reached.
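The coarse sweep plus midpoint refinement can be sketched as follows. The focus-score formula (sum of squared adjacent-pixel brightness differences after low-pass filtering) is taken from the text; the 3-tap filter choice and the refinement bookkeeping are assumptions:

```python
def focus_score(pixels):
    """Sum of squared brightness differences of adjacent pixels,
    after a simple 3-tap moving-average low-pass filter."""
    n = len(pixels)
    smoothed = [(pixels[max(i - 1, 0)] + pixels[i] + pixels[min(i + 1, n - 1)]) / 3.0
                for i in range(n)]
    return sum((b - a) ** 2 for a, b in zip(smoothed, smoothed[1:]))

def autofocus(acquire, start, end, step, min_step=1):
    """Coarse sweep over lens positions, then repeatedly halve the
    step and score midpoint positions around the current best score.
    `acquire(pos)` returns a strip of pixel brightnesses at lens
    position `pos`."""
    scores = {pos: focus_score(acquire(pos))
              for pos in range(start, end + 1, step)}
    while step > min_step:
        best = max(scores, key=scores.get)
        step = max(step // 2, min_step)
        for pos in (best - step, best + step):
            if start <= pos <= end and pos not in scores:
                scores[pos] = focus_score(acquire(pos))
    return max(scores, key=scores.get)
```

Sharp images have high local contrast, so the squared-difference score peaks at the best focus position; the midpoint refinement then narrows in on that peak without imaging every position.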
  • the imaging system 200 performs the processes of autofocusing and automatically adjusting the brightness, as described above, for each sample of a sample plate 212 received by the imaging system 200. After the desired brightness and focus are set, the imaging system 200 then captures an image and stores it in, for example, the data storage 190. In one embodiment, the automatically determined brightness and focus are also stored for each sample. In another embodiment, the software of the imaging system 200 calculates and stores a value associated with the mean of the brightness and focus positions for the aggregate of samples of the first plate. This value is then associated with each of the position/focus score data points in the array. Subsequent plates are examined using the mean brightness and focus as initial imaging values.
  • the imaging system 200 may also include additional functionality related to automatically finding the edges of a droplet in a well of a sample plate 212. In one embodiment, after the edges of the drop have been found, the imaging system 200 finds the centroid of the droplet and moves the lens assembly 230 to the centroid. The imaging system 200 then determines the magnification required to image substantially only that area corresponding to the droplet, adjusts the zoom, and acquires the image.
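A minimal sketch of the centroid and magnification steps follows; the helper names and the bounding-box heuristic for zoom are assumptions:

```python
def droplet_centroid(edge_points):
    """Centroid of detected droplet-edge pixels, given as (x, y)
    tuples; the lens assembly is then moved over this point."""
    xs = [x for x, _ in edge_points]
    ys = [y for _, y in edge_points]
    return sum(xs) / len(xs), sum(ys) / len(ys)

def zoom_for_droplet(edge_points, field_of_view):
    """Magnification factor so the field of view covers substantially
    only the droplet, using its bounding-box extent."""
    xs = [x for x, _ in edge_points]
    ys = [y for _, y in edge_points]
    extent = max(max(xs) - min(xs), max(ys) - min(ys))
    return field_of_view / float(extent)
```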
  • the imaging system 200 may be configured to perform automatic adjustment of aperture.
  • the imaging system 200 receives settings for either maximum image resolution or maximum depth of field.
  • the imaging system 200 determines the corresponding aperture by, for example, looking up one or more tables having values correlating aperture with maximum resolution and/or maximum depth of field.
  • magnification data may be part of these tables.
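The lookup might be organized as below. Every key and value in this table is invented for illustration; the text says only that such tables correlate aperture with maximum resolution and/or maximum depth of field, optionally including magnification:

```python
# Hypothetical aperture lookup, keyed by imaging goal and magnification
# (all numeric values are illustrative placeholders).
APERTURE_TABLE = {
    ("max_resolution", 2): 0.10,
    ("max_resolution", 5): 0.14,
    ("max_depth_of_field", 2): 0.04,
    ("max_depth_of_field", 5): 0.06,
}

def select_aperture(goal, magnification):
    """Return the aperture setting for the requested goal and
    magnification."""
    return APERTURE_TABLE[(goal, magnification)]
```

Opening the aperture favors resolution; stopping it down favors depth of field, which matters when imaging crystals at varying depths within a droplet.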
  • the imaging system 200 may be configured to perform automatic zoom of a substance in a sample stored in a well of the sample plate 212.
  • the imaging system identifies a "crystal-like object" in the sample, calculates its centroid, moves the lens assembly 230 and digital camera 214 to the centroid, adjusts the zoom level, and captures an image of the "crystal-like object.”
  • the imaging system 200 can be configured to capture an image of a sample or a crystal-like object, perform image analysis of the image, adjust imaging parameters (e.g., focus, depth of field, aperture, zoom, illumination filtering, image filtering, brightness, etc.) and retake an image of the sample or crystal-like object.
  • the imaging system 200 can perform this process iteratively until predetermined thresholds (e.g., contrast, edge detection, etc.) are met.
  • the images captured in an iterative process can be either analyzed individually, or can be combined with other images and the resulting image analyzed.
• the imaging system 200 receives a sample plate 212 and, for each sample, performs the following functions: automatic adjustment of brightness and aperture, autofocus, automatic detection of the sample droplet, and acquisition and storage of images.
  • the imaging system 200 stores the aperture, brightness, focus position, drop position and/or size. The imaging system 200 may then use mean values of these factors as initial imaging settings for subsequent plates.
• an illumination source filter 270 ( Figure 2) may be inserted in the filter slot 272 so that the filter 270 is interposed between the light source 216 and the sample plate 212.
  • the various filters 270 may be inserted and removed from the filter slot 272 by a plate handler. Thus, the filter 270 may be automatically removed or exchanged by the imaging system 200.
  • an image filter (such as those that may be placed in the photo-filter carriage 237) may be interposed between the sample droplet in the sample plate 212 and the objective lens 231.
  • the image filter includes a polarization filter that provides a variable amount of polarization on the light incident on the objective lens 231. The use of these filters can be automatically controlled by imaging software routines and/or determined by operator defined variables.
  • the motorized control of aperture, focus, and zoom of the lens assembly 230 in conjunction with remote control of the light source 216 allows dynamic optimization of contrast, field of view, depth of field, and resolution.
  • Figure 17 depicts a functional block diagram of an automated sample analysis system 1700 having an imaging system 100, 150, or 200.
  • the system 1700 includes controllers and logic 1760 for controlling various subsystems housed in a cabinet 1702.
  • the system 1700 can further include a shelf access door 1712 for allowing access to a removable shelf system 1720 and/or a stationary shelf system 1722.
  • a removable shelf access door 1710 can be provided.
  • the system 1700 can include a transport assembly 1730 that can consist of a plate handler 1732, an elevator assembly 1734, and a rotatable platform 1736.
  • the system 1700 can further include an environmental control subsystem 1765 that employs a refrigeration unit 1762 and/or a heater 1764.
  • the system 1700 also includes an imaging system 200 as has been described above.
• the imaging system 200 having subcomponents 210, 214, 216, 218, 220, and 230, which are fully detailed above with reference to Figures 2-16, can be housed in the cabinet 1702. This arrangement ensures that the samples in the sample plates remain at all times within the confines of a controlled environment. That is, once a sample plate is stored in the cabinet 1702, it is unnecessary to expose the sample plate to the environment external to the cabinet since the system 1700 is capable of automatically (i.e., without operator intervention) carrying out the imaging of the sample within the cabinet 1702.
  • FIG. 18 depicts a block diagram of an imaging and analysis system 1800, according to one embodiment of the invention.
  • the imaging system 1805 can be an imaging system 100, 150, or 200 as described above, or another suitable imaging system that provides similar functionality to the imaging systems described herein.
  • the system 1800 includes an imaging system controller 1820 that provides logical control of the imaging system 1805 to, for example, direct the imaging system 1805 to image a particular sample on a particular sample plate 212, all the samples on the sample plate 212, or image a subset of the samples.
  • the imaging controller 1820 may also control the imaging parameters used by the imaging system 1805.
  • imaging parameters can include, for example, focus, depth of field, aperture, zoom, illumination filtering, image filtering and brightness.
  • the system 1800 also includes an image storage device 1810 that stores images of samples captured by the imaging system 1805.
• the image storage device 1810 can be any suitable computer-accessible storage medium capable of storing digital images, e.g., a random access memory (RAM), hard disk, floppy disk, optical disk, compact disc, or magnetic tape.
• Figure 18 shows the image storage device 1810 separate from the imaging system 1805.
  • the image storage device 1810 can be included in the imaging system 1805, or it may be included in a system that may also include an image analyzer 1815, the imaging system controller 1820, or a scheduler 1825.
  • a computer includes all the control, scheduling, analysis and imaging software for the system 1800.
  • the software for the system 1800 may reside and run on a plurality of computers that are in communication with each other.
  • the imaging system 1805 may be configured to provide captured images directly to the image analyzer 1815, or it may be configured to typically store images on the image storage device 1810 and provide images to the image analyzer 1815 as directed by the imaging system controller 1820.
  • the scheduler 1825 communicates with the image analyzer 1815 and the imaging system controller 1820 to control the analysis and imaging of samples based on user provided input.
  • the scheduler can schedule the imaging of a particular droplet or a plurality of droplets on a sample plate, and coordinate the imaging of said droplet or plurality of droplets with its subsequent analysis.
  • the scheduler 1825 can use a database 1830 to store information relating to scheduling the images and image specific information, for example, the size of pixels in each of the stored images, in a suitable format for quick retrieval. Knowing the pixel size can allow the analyzer 1815 to reduce sampling to an appropriate density and size for particular objects in the image.
  • the information in the database 1830 can be available with each request to process an image.
  • the database 1830 can reside on the same computer as the scheduler 1825 or on a separate computing device.
  • the scheduler 1825 provides an analysis request to the image analyzer 1815.
  • the analysis request includes an image list, including the resolution of each image and the absolute X,Y location of its center.
  • the image list typically contains only one image but may contain a plurality of images.
  • the analysis request can also contain an analysis method including a list of parameters that specify options controlling how to analyze the image(s) and what to report.
  • the analysis request can include the Uniform Resource Locator ("URL") of a definition file 1835, i.e., an electronic address that may be on the Internet, such as an ftp site, gopher server, or Web page.
  • the definition file 1835 defines parameters used by the image analyzer 1815, e.g., neural network dimensions, weights and training resolution (e.g., pixel granularity, or the spacing between pixels, of images used to train the neural network).
  • the definition file 1835 may be a single file or a plurality of files, but will be referred to hereinafter in the singular.
  • the image analyzer 1815 also receives an analysis method file(s) 1840.
  • the analysis method file may be a single file or a plurality of files, but will be referred to hereinafter in the singular.
  • the analysis method file 1840 includes parameters that can be used by the various image analysis modules contained in the image analyzer 1815, e.g., a content analysis module 1930, a notable regions module 1935, and a crystal object analysis module 1940 ( Figure 19), described below, according to one embodiment.
  • the image analyzer 1815 can also include functionality that determines the content of an image in terms of objects and/or regions of, for example, crystals or precipitate, or clear regions, that is, regions that do not show any features.
• the image analyzer 1815 includes a neural network to identify features, e.g., crystals, precipitate, and edges, that are depicted in the image, according to one embodiment.
  • the image analyzer 1815 is configured to identify objects and regions of interest in an image quickly enough to allow the system 1800 to re-image specific objects or regions, if desired, while the corresponding sample plate is still in the imaging system 1805.
  • the image analyzer 1815 provides an analysis response to the scheduler 1825.
• the analysis response typically includes the parameters used for the analysis and the results of the particular analysis performed, e.g., the counts of crystal, precipitate, clear and edge samples, regions of crystals, and/or a list and description of objects found in the image.
  • the analysis results can be reviewed using an output display 1845 that can be co-located with the scheduler or at a remote location.
  • the output displays may be coupled to the system 1800 via a web server, or via a LAN or other small network topology.
  • Embodiments of a remote output display in accordance with the invention are described in related United States Provisional Patent Application entitled "REMOTE CONTROL OF AUTOMATED LABS,” having Application No. 60/444,585.
  • Figure 19 depicts a computer 1900 that includes a processor 1905 in communication with memory 1910, e.g., a hard disk and/or random access memory (RAM).
  • the processor 1905 is also in communication with an image analysis module 1960 that can include various modules configured to perform the functionality of the image analyzer 1815 ( Figure 18) described herein.
  • the computer 1900 may contain conventional computer electronics that are not shown, including a communications bus, a power supply, data storage devices, and various interfaces and drive electronics. Although not shown in Figure 19, it is contemplated that in some embodiments, the computer 1900 may include a video display (e.g., monitor), a keyboard, a mouse, loudspeakers or a microphone, a printer, devices allowing the use of removable media including, but not limited to, magnetic tapes and magnetic and optical disks, and interface devices that allow the computer 1900 to communicate with another computer, including but not limited to a computer network, a LAN, an intranet, or a WAN, e.g., the Internet.
  • the computer 1900 is in communication with an imaging storage device, for example, image storage device 1810 ( Figure 18), and is configured to receive an image of a sample from the storage device and determine the contents of the sample, using one or more analysis processes.
  • the computer 1900 can be co-located with the image storage device, located near the image storage device, e.g., in the same building, or geographically separated from the image storage device.
  • the computer 1900 can receive the image from the image storage device via, e.g., a direct electronic connection or through a network connection, including a local area network, or a wide area network, including the Internet. It is also contemplated the computer 1900 can receive the image via a suitable type of removable media, e.g., a 3.5" floppy disk, compact disc, ZIP drive, magnetic tape, etc.
  • the computer 1900 can be implemented with a wide range of computer platforms using conventional general purpose single chip or multichip microprocessors, digital signal processors, embedded microprocessors, microcontrollers and the like.
  • the computer 1900 can operate independently, or as part of a computing system.
• the computer 1900 may include stand-alone computers as well as personal computers, workstations, servers, clients, minicomputers, main-frame computers, laptop computers, or a network of individual computers.
  • the configuration of the computer 1900 may be based, for example, on Intel Corporation's family of microprocessors, such as the PENTIUM family and Microsoft Corporation's WINDOWS operating systems such as WINDOWS NT, WINDOWS 2000, or WINDOWS XP.
  • the computer 1900 includes one or more modules or subsystems that incorporate the analysis processes described herein.
  • each module can be implemented in hardware or software, or a combination thereof, and comprise various subroutines, procedures, definitional statements, and macros that perform certain tasks.
  • all the modules are typically separately compiled and linked into a single executable program.
  • the processes performed by each module may be arbitrarily redistributed to one of the other modules, combined together with other processes in a single module, or made available in, for example, a shareable dynamic link library.
  • a module may be configured to reside on the addressable storage medium and configured to execute on one or more processors.
  • a module may include, by way of example, other subsystems, components, such as software components, object-oriented software components, class components and task components, processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables. It is also contemplated that the computer 1900 may be implemented with a wide range of operating systems such as Unix, Linux, Microsoft DOS, Macintosh OS, OS/2 and the like.
  • the analysis module 1960 can include a pre-processing module 1925 that can filter the received image prior to further processing.
  • the image may be filtered to remove "noise” such as speckles, high frequency noise or low frequency noise that may have been introduced by any of the preceding steps including the imaging step.
• Filtering methods to remove high frequency or low frequency noise are well known in image processing, and many different methods may be used to achieve suitable results. For example, according to one embodiment of a filtering procedure that removes speckle, for each pixel, the mean and standard deviation of every other pixel along the perimeter of a 5x5 pixel area centered on that pixel are computed. If the center pixel varies from the mean by more than a threshold multiplied by the standard deviation, it is replaced by the mean value. Then the slope of the 5x5 image pixel intensities is calculated and the center pixel is replaced by the mean value of pixels interpolated on a line across the calculated slope.
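The first, outlier-replacement stage of the speckle filter described above might be sketched as follows. This is a minimal illustration, not the patented implementation: the choice of 8 ring pixels (alternate pixels of the 16-pixel 5x5 perimeter) and the default threshold of 2.0 are assumptions, and the slope-based second stage is omitted.

```python
import numpy as np

def despeckle(img, threshold=2.0):
    # Every other pixel along the perimeter of the 5x5 window
    # (8 of the 16 perimeter pixels), as offsets from the centre.
    perim = [(-2, -2), (-2, 0), (-2, 2), (0, 2),
             (2, 2), (2, 0), (2, -2), (0, -2)]
    img = np.asarray(img, dtype=float)
    out = img.copy()
    h, w = img.shape
    for y in range(2, h - 2):
        for x in range(2, w - 2):
            ring = np.array([img[y + dy, x + dx] for dy, dx in perim])
            mean, std = ring.mean(), ring.std()
            # A centre pixel deviating from the ring mean by more than
            # threshold * std is treated as speckle and replaced.
            if abs(img[y, x] - mean) > threshold * std:
                out[y, x] = mean
    return out
```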
  • the analysis module 1960 also includes one or more modules that perform image analysis to determine information about the sample contents, including content analysis module 1930, notable regions analysis module 1935, and crystal object analysis module 1940.
  • the content analysis module 1930 determines the count of crystal, precipitate, clear and edge pixels in the image, and can be optionally enabled to operate only inside a specific region of the sample.
  • the notable regions analysis module 1935 determines a list of regions of a specified pixel type, e.g., crystal, precipitate, clear and edge pixels.
• the crystal object analysis module 1940 determines objects containing crystal pixels that meet certain criteria, for example, size, area, or density.
• Figure 19 also shows that the analysis module 1960 includes a report inner/outer non-clear ratio module 1945 that determines the ratio of non-clear pixel density inside a sample region over non-clear pixel density outside a sample region.
• the analysis module 1960 also includes a graphical output analysis module 1950 that generates a color-coded image depicting each of the various features found in a sample image in a specified color. These modules are further described hereinbelow.
  • Other analysis modules 1955 that incorporate different image analysis processes may also be included in the analysis module 1960.
  • an analysis module 1955 can analyze the change in two or more images of the same sample taken at two different times.
• the analysis module 1955 can receive the count of pixels that are classified as crystal, precipitate, clear or edge pixels in an image of a particular region of a sample at a time T1 and save the count information with a reference to the region of the sample imaged. When the same region of a sample is re-imaged at a later time T2, the analysis module 1955 receives the count of pixels that are classified as crystal, precipitate, clear and edge pixels in the image of the sample region at time T2. The analysis module 1955 can compare the count information from times T1 and T2 to determine if the droplet contains a crystal(s). One analysis method compares the total number of pixels classified as crystal pixels at times T1 and T2 to determine if the sample contains crystal.
• Another comparison method compares the percentage of crystal pixels at time T1 to the percentage of crystal pixels at time T2. If the count or the percentage of crystal pixels increases beyond a threshold value, the sample will be deemed to contain crystals.
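The percentage-based comparison just described might look like the following sketch. The dictionary format for the per-class pixel counts and the 5% default threshold are illustrative assumptions, not values from the patent.

```python
def crystal_growth_detected(counts_t1, counts_t2, pct_threshold=5.0):
    """counts_t1 / counts_t2 map pixel classes ('crystal', 'precipitate',
    'clear', 'edge') to pixel counts for the same sample region imaged
    at times T1 and T2."""
    total_t1 = sum(counts_t1.values()) or 1
    total_t2 = sum(counts_t2.values()) or 1
    pct_t1 = 100.0 * counts_t1.get("crystal", 0) / total_t1
    pct_t2 = 100.0 * counts_t2.get("crystal", 0) / total_t2
    # The sample is deemed to contain crystals when the share of
    # crystal pixels has grown beyond the threshold.
    return (pct_t2 - pct_t1) > pct_threshold
```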
  • a time-based comparison method where the count information is saved for one image and compared to a second subsequent image, can be used with any sample processing algorithm.
• the analysis module 1955 may analyze a series of two or more images of crystal growth using a grid approach. In this analysis method, two images I1 and I2 are divided up into grids, and the corresponding grids in each image are compared for change in the number of crystal pixels, using, for example, the actual number of pixels or the percentage of crystal pixels.
  • the pixel count information can be kept for each image and used to compare to other images taken at a different time.
  • the method can include analyzing every pixel, or skipping one or more pixels between the pixels analyzed.
  • a scheduler module 1915 and an imaging system controller module 1920 are also included in computer 1900, according to one embodiment. These modules are configured to include functionality that schedules the imaging of sample plates/droplet samples and subsequent analysis of the images, and controls the imaging system 100, 150, 200, 1805, as described herein, e.g., for scheduler 1825 and imaging system controller 1820, respectively.
  • the image analysis software package may include support software that performs training and configuring of perception and analysis functionality, e.g., for a neural network.
• Some of the algorithms included in the image analysis software modules may use stochastic processing and may include the use of pseudo-random number generation to find answers. All such functions can be provided a random number generator seed in request parameters received by the software module.
  • the image analysis modules can be configured so that an analysis method using a pseudo-random number does not affect the results of a different analysis method or software module.
• the image analysis software works with an image size of, for example, 800 by 600 pixels, a zoomed-in resolution of 2,046 pixels/mm (0.5 µm/pixel), and a zoomed-out resolution of 186 pixels/mm (5.4 µm/pixel), or 1,024 by 1,024 pixels, a zoomed-in resolution of 2,460 pixels/mm (0.41 µm/pixel), and a zoomed-out resolution of 220 pixels/mm (4.5 µm/pixel).
  • the image analysis modules may optionally use the same neural network for both zoomed-in and zoomed-out images, however, quality of the results may suffer if only one neural network is used and it may be advantageous to train multiple neural networks, e.g., one for zoomed-in images and one for zoomed-out images.
  • the image analysis software can also be adaptable to other image sizes and pixel resolutions, however, the training of new neural networks may be necessary in order to suitably process these images. If the resolution of the images vary, each definition file may include its training resolution, that is, the spacing between sampled pixels that was used to train the neural network. This information allows the algorithms to consider how to adapt images of varying resolution for use with the neural networks.
  • the analysis module receives an analysis request ( Figure 18) containing an image list that includes the images to be analyzed.
  • the analysis request also includes, for each image, its resolution in pixels/mm and the absolute X-Y location of the center of the image. Typically, there is only one image in the image list, however, multi-image methods may also be used.
  • the analysis request also includes an analysis method, which is a collection of parameters that specify options controlling how to analyze the images and what to report. In specifying the analysis method, a URL of the definition file is included.
  • the definition file defines the neural network's dimensions, weights and training resolution, i.e., a pixel granularity of the images that were used to train the neural network. Examples of the parameters are first described generally below, and then specifically as they relate to the content analysis module 1930, notable regions analysis module 1935, and the crystal object analysis module 1940, according to one embodiment.
  • the analysis request may include parameters that specify how a working copy of the image is prepared for all subsequent processing.
  • parameters can include options for a color to grayscale conversion of the image, and resizing of the image using pixel interpolation methods.
  • the parameters may specify the output of an image, for example, they may specify whether and how an image file representing the pixel interpretation should be generated. This generated image file may be visually displayed and further evaluated by a user.
• the parameters are also used by the analysis modules, e.g., in the content analysis module, the parameters specify whether an image is scanned and analyzed to determine statistics of its contents in terms of crystal, precipitate, clear and edge features. These parameters specify whether crystal-like objects should be searched for and reported. Options may include a scan grid, ID criteria and the maximum number of objects to find.
  • the parameters may also be used by the notable region analysis module 1935 to specify whether notable regions in an image should be reported and, if so, the scan grid in micrometers, the size that is the width times the height in micrometers, the ID criteria, and the quantity of regions to report.
  • the crystal object analysis module 1940 can use the parameters to specify whether effective contiguous subregions of crystals are identified and reported as crystal objects, how this identification should be performed, and the quantity of crystal objects to identify.
  • the parameters can also specify whether to report the inner/outer non-clear ratio. If this ratio is to be specified, the output includes a ratio of the non-clear pixel density inside a sample region over the non-clear pixel density outside of the sample region.
  • the ratio would be 3.0 if every 100th pixel inside of a sample region is non-clear and every 300th pixel outside of a sample region is non-clear. According to one embodiment, ratios above 1 billion are truncated to that value.
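The inner/outer non-clear ratio, including the one-billion truncation noted above, might be computed as in this sketch; the parallel-list input format is an illustrative assumption.

```python
def inner_outer_nonclear_ratio(labels, inside):
    """labels: list of per-pixel classes ('clear', 'crystal', etc.);
    inside: parallel list of booleans, True for pixels inside the
    sample region. Both formats are illustrative assumptions."""
    inner = [l for l, m in zip(labels, inside) if m]
    outer = [l for l, m in zip(labels, inside) if not m]
    inner_density = sum(l != "clear" for l in inner) / max(len(inner), 1)
    outer_density = sum(l != "clear" for l in outer) / max(len(outer), 1)
    if outer_density == 0:
        return 1e9                      # truncate, per the note above
    return min(inner_density / outer_density, 1e9)
```

Reproducing the worked example above, 1 non-clear pixel per 100 inside and 1 per 300 outside yields a ratio of 3.0.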
  • Image sampling parameters may include, for example, a color processing parameter which specifies how each pixel is converted to a floating point intensity value, or it may specify the linear grayscale for image conversion. If the image is already grayscale, pixels are converted to black, e.g., 0.0, or to white, e.g., 1.0. If color is selected, the pixels are linearly converted to 0.0 to 1.0 with equal channel weighting for each color.
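The conversion to floating-point intensities with equal channel weighting might be sketched as follows, assuming (as an illustration) 8-bit input pixels:

```python
import numpy as np

def to_intensity(img):
    """Convert an 8-bit image to floating-point intensities in
    [0.0, 1.0], with 0.0 black and 1.0 white. Colour images use
    equal weighting of the three channels."""
    img = np.asarray(img, dtype=float)
    if img.ndim == 2:                 # already grayscale
        return img / 255.0
    return img.mean(axis=2) / 255.0   # equal weighting of R, G, B
```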
  • Pixel interpolation parameters may include, for example, no pixel interpolation, that is, only a closest pixel method will be used for pixel interpolation. This is generally the fastest interpolation method but typically results in reduced image quality.
  • Interpolation methods that may be selected include bilinear and cubic spline interpolation, which yield higher quality images but they are more computationally complex and take more time or resources to generate.
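Of the interpolation options named above, bilinear interpolation might be sketched as follows for a single fractional-coordinate sample; this is a generic textbook formulation, not code from the patent:

```python
import math

def bilinear_sample(img, x, y):
    """Sample an intensity at fractional coordinates (x, y) from a
    2-D list-of-lists of floats, blending the four nearest pixels."""
    x0, y0 = int(math.floor(x)), int(math.floor(y))
    x1 = min(x0 + 1, len(img[0]) - 1)
    y1 = min(y0 + 1, len(img) - 1)
    fx, fy = x - x0, y - y0
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
    bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
    return top * (1 - fy) + bot * fy
```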
  • the re-size parameter includes options of 1:1, that is, the image is not resized, automatic, where the image is resized to match the training resolution using the specified interpolation method, and scale factor, where the image is re-sized using this factor and specified interpolation method.
• the analysis modules 1930, 1935, 1940 are configured to receive an analysis request from a scheduler module 1915 and generate a response, as described below.
• the content analysis module 1930 determines counts of types of pixels in the sample images, e.g., crystal, precipitate, clear and edge pixels, as depicted in the image. In the illustrative embodiment described herein, the content analysis module 1930 is implemented as a neural network.
  • the content analysis module 1930 receives a set of parameters that include parameters that indicate whether this module should be enabled, whether the content analysis should take place inside the sample region only or inside and outside the sample region, and the number of pixels to be skipped during the image analysis. If enable is set to NO, no analysis by the content analysis module 1930 is done and nothing is reported. If enable is set to YES, then the content analysis module analyzes the sample image. If the inside-sample-region-only option is set to YES, the edge of the sample region is found first, and the analysis is done only within the sample region edge. If inside-sample-region-only is set to NO, then checking is done inside and outside the sample region.
• a process for identifying the edge of a sample region is described hereinbelow in reference to Figure 20, according to one embodiment. If the number of pixels to be skipped is set to 0, all the pixels in the image will be used. If the number of pixels to be skipped is set to 1, every other pixel in the image will be used for the content analysis; if 2, every third pixel will be used, etc. The default parameter for skipped pixels is typically set to 0.
• the response of the content analysis module 1930 includes an "echo" of the parameters used during the content analysis processing, and the counts of crystal, precipitate, clear and edge pixels found in the image. If the inside-sample-region-only option is enabled, the edge count can be used to assess how well the edge of the sample region was found. If it is not enabled, the edge count may be ignored.
  • the notable region analysis module 1935 processes an image and determines regions of a specified size that include the minimum levels of crystal, precipitate or non-clear pixels.
  • the request parameters for the notable region analysis module 1935 can include an enable parameter which is set to either "YES” or "NO” that determines if notable region analysis should be performed and reported.
  • the request parameters can also include a region size or area that is used to determine the size of the smallest region the notable region analysis module will identify.
  • a skip-pixel parameter can be included to control the number of pixels that will be skipped during processing, where "0” means to check all of the pixels, "1" means to sample every other pixel, that is, sample the pixels with one unsampled pixel between them, etc. Typically, the default value for skip-pixel is "0.”
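The skip-pixel convention described above might be expressed as a simple coordinate generator; the function name and its use of image dimensions in pixels are illustrative assumptions:

```python
def sampled_coords(width, height, skip=0):
    """Coordinates sampled under the skip-pixel convention: skip=0
    samples every pixel, skip=1 every other pixel (one unsampled
    pixel between samples), skip=2 every third pixel, and so on."""
    step = skip + 1
    return [(x, y) for y in range(0, height, step)
                   for x in range(0, width, step)]
```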
  • the request parameters can also include the maximum number of regions to report and the minimum percentages of crystal pixels, precipitate pixels and non-clear pixels to report. Typically, pixels determined to be edge-type pixels are ignored.
  • the notable region analysis module 1935 can be configured to identify regions with the highest percentage of each specified pixel type. If the regions contain less than the minimum percentage of pixels, it is not saved and the search for regions ends. Regions typically do not go outside of the input image. Newly found regions generally do not overlap existing regions.
• the report of results from the notable regions analysis module includes all the request parameters and a list of the regions identified. The results for each region can include its absolute position, size, the number of crystal pixels and the total pixels sampled, not including edge pixels.
  • the crystal object analysis module 1940 identifies small regions in the image that are rich in crystal pixels.
  • the small regions, or objects, comprise one or more "cells.”
  • the request parameters for the crystal object analysis module can include an enablement parameter which determines if this analysis should be performed and reported.
• the request parameters also include a skip-pixels parameter that operates as previously described above, and parameters that control the size of the cells identified, for example, a cell-minimum-size parameter to control the smallest width or height of a cell, a cell-minimum-area parameter which indicates the smallest overall area of a cell, a cell-minimum-density parameter which indicates the proportion from 0 to 1 of crystal pixels the cell must contain in order to be reported, and an object-minimum-size parameter which indicates one or more dimensions that the overall object must achieve in order to be reported.
  • the request parameters can also include a pseudo-random generator seed which is used for the crystal object analysis stochastic processing.
  • the crystal object analysis module 1940 typically includes the limitation that the center of a cell cannot be inside another cell.
  • Identified cells that touch are grouped and identified as a single crystal object, and the largest overall dimension of the crystal object is computed. If the largest overall dimension is less than the minimum size, the object is discarded.
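Grouping touching cells into a single crystal object can be sketched with a small union-find, as below. The `(x, y, w, h)` rectangle format for cells is an illustrative assumption, and "touching" is read here as rectangles that overlap or share an edge or corner.

```python
def group_touching_cells(cells):
    """cells: list of (x, y, w, h) rectangles. Cells that touch are
    grouped into one crystal object; returns groups of cell indices."""
    def touch(a, b):
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        # Rectangles touch unless separated along either axis.
        return not (ax + aw < bx or bx + bw < ax or
                    ay + ah < by or by + bh < ay)

    parent = list(range(len(cells)))

    def find(i):                       # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(len(cells)):
        for j in range(i + 1, len(cells)):
            if touch(cells[i], cells[j]):
                parent[find(i)] = find(j)

    groups = {}
    for i in range(len(cells)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())
```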
  • the crystal object analysis processing can also compute an object area as the sum of the cell density times the cell area, and further compute the object centroid.
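The object area and centroid computations above might be sketched as follows; the `(x, y, w, h, density)` cell format is an illustrative assumption, and taking the centroid as the area-weighted centre of the cells is one plausible reading of the description.

```python
def object_area_and_centroid(cells):
    """cells: list of (x, y, w, h, density) tuples with density in
    [0, 1]. Object area is the sum of cell density times cell area;
    the centroid is the area-weighted centre of the cells."""
    area = sum(d * w * h for (x, y, w, h, d) in cells)
    if area == 0:
        return 0.0, (0.0, 0.0)
    cx = sum(d * w * h * (x + w / 2) for (x, y, w, h, d) in cells) / area
    cy = sum(d * w * h * (y + h / 2) for (x, y, w, h, d) in cells) / area
    return area, (cx, cy)
```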
• the results from the crystal object analysis module 1940 can include all the request parameters provided to the module, and a list of the objects identified and their descriptions. The list is sorted in descending order by an object's area. Each object description includes the object area (µm²), the centroid (X, Y in µm) and a list of cells that make up each object. Each cell is described with its absolute position and size (µm), crystal pixel count and total pixel count.
  • the graphical output module 1950 generates a representation of the analyzed image which can be displayed and further analyzed. For example, grayscale and/or color coding pixel characteristics may be adjusted by the graphical output module 1950.
  • the analysis request for the graphical output module 1950 includes an image path parameter that defines where the image to be analyzed is found. If the image path parameter is empty, no further processing is done.
  • a base value parameter indicates whether a "base image," i.e., an image used to generate the representation of the analyzed image, is either black, gray or white. If the base value is gray, the base image begins as a grayscale rendition of the resampled image. Otherwise, the base image begins as a white or black image, as indicated by the base value.
  • the parameters include a gray "min” value and a gray “max” value, which are typically from 0 to 1, and specify the linear grayscale compression.
  • adjusting the gray min or max values can control the color coding contrast or flatten the image, and they are typically set to defaults of 0 for the gray min and .75 for the gray max.
• An opaque parameter indicates whether a pixel in the base image should be replaced with the color coding associated with the particular type of corresponding pixel in the analyzed image. For example, if the opaque parameter is set to YES or the base parameter equals black or white, the appropriate color coding replaces the pixel. If the opaque parameter is set to NO, the color for a base image pixel is generated by OR'ing the color with the corresponding pixel in the analyzed image.
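The opaque-versus-OR behaviour above might be sketched per pixel as follows, assuming (as an illustration) 8-bit RGB tuples:

```python
def code_pixel(base_rgb, class_rgb, opaque):
    """One pixel of the colour-coded output: with opaque=True the
    class colour replaces the base pixel; otherwise the colours are
    combined by bitwise OR of the 8-bit channels."""
    if opaque:
        return class_rgb
    return tuple(b | c for b, c in zip(base_rgb, class_rgb))
```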
  • a crystal color parameter provided in the analysis request sets the color coding value for pixels identified as crystals, a precipitate color parameter sets the color coding for precipitate pixels, and an edge color sets the color coding for pixels identified as edges.
  • the default values for the crystal color parameter may be blue, the precipitate color parameter may be green and the edge color parameter may be red.
  • the graphical output module 1950 writes the color coded image file to the image path specified in the request parameters, unless the path parameter is empty or invalid.
  • the generated color-coded image file typically does not contain region annotations, but annotations can be superimposed on the image file by another process, if desired.
  • the graphical output module 1950 provides an analysis report to the scheduler module 1915 that includes the request parameters that were used to produce the color coded image file.
  • the analysis modules 1930, 1935, 1940 can function as service functions capable of quickly identifying objects and/or regions within an image, so that the scheduler module 1915 can dispatch control information to the imaging controller module 1920, which in turn directs the imaging system to re-image specific areas of a droplet using at least one different imaging parameter (e.g., a different magnification or zoom level, or a different lighting configuration such as off-axis lighting), while the sample plate containing the sample just analyzed is still in the imaging device.
  • an analysis module 1960 can analyze at least 10,000 images per day under typical conditions, where the images are less than or equal to 1.0 megapixel, i.e., the equivalent of processing each image in 8.64 seconds, and where one instance of the image analysis software is running on one PC.
  • the analysis module may be packaged and distributed in a Java 2 file. The Java Message Service may be used to deliver requests to, and send responses from, the analysis module(s). Extensible markup language (XML) may also be used for the analysis requests and responses.
  • Test images are used with training software to train the neural networks to analyze crystal growth in sample droplets.
  • Training software allows the user to create, open, display, edit and save lists of images in training/test set files, and is described herein according to one embodiment of the invention.
  • the test images include identified subimages containing edge, crystal, precipitate and clear pixels within a wide variety of images. For each image, the user can designate "training subimages" as crystal, precipitate, edge or clear. The resolution of the subimages can be user-adjustable.
  • the software can include a single-click designation action that efficiently designates the subimages as crystal, precipitate, edge or clear.
  • the images containing the designated training subimages can be saved as a set of training files.
  • the training software can display training subimages in table form and/or as color-coded markers on an image. Subimages may be moved by either dragging the marker or editing the table. Subimages may also be deleted either from the image or from the table.
  • the training software can be configured to allow a user to define the neural network dimensionality, select a training set file and another file for testing, and perform iterative training and testing using the selected sets of files. Training data, e.g., neural network weights, training and test error, and the number of iterations, are saved in a definition file.
  • the intensity levels of pixels in a selected image area are provided as an input to the neural network.
  • the neural network identifies each pixel as a particular type of pixel, e.g., edge, precipitate, crystal or clear.
  • the results are compared to what is actually correct, and corresponding error values are calculated. Small adjustments are made to the weights within the neural network based on the error values, and then another test image containing a designated subimage is provided as an input to the neural network. This process is performed for other test images and can be repeated for many thousands of iterations, where each time the weights may be slightly adjusted to provide a more accurate output.
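The error-driven weight-adjustment cycle described above can be illustrated with a deliberately tiny, single-neuron classifier. The patent's network is larger and distinguishes four pixel types; this sketch only separates "crystal-like" from "clear" intensity vectors, and every name, the network shape, and the learning rate are illustrative assumptions:

```python
import math
import random

def train_pixel_classifier(samples, labels, iters=2000, lr=0.5):
    """Train a single sigmoid neuron on subimage intensity vectors.
    Each iteration compares the output to the correct label and makes
    a small weight adjustment based on the error."""
    random.seed(0)
    n = len(samples[0])
    w = [random.uniform(-0.1, 0.1) for _ in range(n)]
    b = 0.0
    for _ in range(iters):
        for x, t in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            y = 1.0 / (1.0 + math.exp(-z))   # confidence rating in [0, 1]
            err = t - y                      # compare to the correct label
            grad = lr * err * y * (1.0 - y)  # small correction step
            for i in range(n):
                w[i] += grad * x[i]
            b += grad
    return w, b

def rate(w, b, x):
    """Confidence rating for an intensity vector, between 0 and 1."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))
```

Trained on bright ("crystal") versus dark ("clear") intensity vectors, the neuron's rating moves toward 1 for bright inputs and toward 0 for dark ones.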
  • an image of a sample droplet is provided as an input to the neural network.
  • the output of the neural network includes a rating for each pixel that indicates a degree of confidence that the pixel depicts each of the different pixel classifications, for example, edge, crystal, precipitate, and clear.
  • the rating is typically between zero and one, where zero indicates the lowest degree of confidence and one indicates the highest degree of confidence.
  • the overall content of an image can be determined by counting the number of pixels of each classification and computing the percentages of crystal, precipitate, edge and clear pixels contained in the image.
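Summarizing classified pixels as percentages, as described above, is straightforward; a minimal sketch with illustrative names:

```python
from collections import Counter

def content_summary(pixel_classes):
    """Return the percentage of pixels assigned to each classification
    (e.g., crystal, precipitate, edge, clear)."""
    counts = Counter(pixel_classes)
    total = len(pixel_classes)
    return {cls: 100.0 * n / total for cls, n in counts.items()}
```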
  • one analysis option identifies edges of a drop within the image, and may be used with quick and coarse resolution search parameters to first identify the edge of the drop, and then the interior of the drop may be analyzed with a higher resolution search.
  • a supervised learning type of neural network is used to classify the subimages as crystal, precipitate, edge of drop or clear, using the pixel intensity, not the pixel hue.
  • the entire image is scanned, sampling subimages on a host-specified grid, where the spacing of the grid is in millimeters, not pixels. The resolution of the images is provided as a parameter received from the host.
  • Pie charts can be generated graphically showing the results of the neural network analysis.
  • Each image analysis method file contains neural network definitions, e.g., "dimensions” and "weights.”
  • the method file also includes parameters that specify the analysis options including whether to perform drop edge detection, and if drop edge detection is selected, the sample grid spacing used to find the edges of the drop, and the sample grid spacing to find crystals within the drop. For example, drop edge detection finds the edge of a drop quickly with a relatively coarse grid spacing scan and then uses a relatively fine grid spacing scan inside the drop, according to one embodiment.
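The coarse-then-fine grid scan can be sketched as below. Grid spacing is in millimetres, not pixels, as stated above; the `inside_drop` predicate stands in for the result of the coarse edge-detection pass and is a hypothetical placeholder:

```python
def grid_points(width_mm, height_mm, spacing_mm):
    """Subimage sample positions on a grid spaced in millimetres."""
    xs = [i * spacing_mm for i in range(int(width_mm / spacing_mm) + 1)]
    ys = [j * spacing_mm for j in range(int(height_mm / spacing_mm) + 1)]
    return [(x, y) for y in ys for x in xs]

def two_pass_scan(width_mm, height_mm, coarse_mm, fine_mm, inside_drop):
    """First pass: coarse grid over the whole image to locate the drop edge.
    Second pass: fine grid restricted to points inside the drop."""
    coarse = grid_points(width_mm, height_mm, coarse_mm)
    fine = [p for p in grid_points(width_mm, height_mm, fine_mm)
            if inside_drop(p)]
    return coarse, fine
```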
  • a database can be used to associate the image analysis file with the image analysis results, so that if a better image analysis method is available at a later time, an image may be re-analyzed using the later analysis method.
  • the analysis modules can use a neural network to classify the contents of an image.
  • a fast operator can be used to identify if a pixel has a particular crystal characteristic.
  • One embodiment of an edge detection process is described below and illustrated in Figure 20A. Color or black and white images of a sample droplet can be generated and used for identifying crystals.
  • the edge detection process 2000 receives the image of a sample that may contain crystals.
  • the process 2000 determines if the image received is a color image. If the image is a color image, it is converted to a grayscale image at step 2015. The image may be filtered at step 2020 to minimize undesirable characteristics such as speckle or other types of image "noise" during subsequent processing.
  • the edge detection process 2000 uses the gradient of the intensity of the pixels in the image to identify edges.
  • gradient information is calculated from a 3x3 set of pixels using a calculation based on the best fit of a plane through the image points.
  • the gradient of intensity of the pixel in the center of the 3x3 set of pixels is the direction and magnitude of the maximum slope of the plane.
  • the use of a 3x3 set of pixels helps to eliminate some of the effects of image noise on the process.
  • Gradient information is calculated for selected pixels in the image. All the pixels in the image may be selected, or a subset of the pixels, e.g., an area of interest in the image which may be smaller than the whole image, may be selected.
  • Gradient information is calculated for each selected pixel and stored in three arrays of the same dimensions as the received image.
  • the first array contains the cosine of the angle of the gradient direction.
  • the second array contains the sine of the angle of the gradient direction.
  • the third array contains the magnitude, or steepness, of the gradient. Pixels with a calculated magnitude less than a given threshold have their gradient information set to zero so they are eliminated from further processing.
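The three gradient arrays described above can be sketched as follows. For a 3x3 window with unit pixel spacing, the least-squares plane fit reduces to averaged column and row differences divided by 6; this is an illustrative reconstruction (names and the default threshold value are assumptions), not the patent's code:

```python
import math
import numpy as np

def gradient_arrays(image, mag_threshold=1.0):
    """Return cos(angle), sin(angle) and magnitude arrays, all the same
    dimensions as the image. Gradients come from the best-fit plane
    through each interior 3x3 neighborhood; magnitudes below the
    threshold are zeroed so those pixels drop out of later steps."""
    img = image.astype(float)
    h, w = img.shape
    cos_a = np.zeros((h, w))
    sin_a = np.zeros((h, w))
    mag = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            win = img[y - 1:y + 2, x - 1:x + 2]
            gx = (win[:, 2] - win[:, 0]).sum() / 6.0  # plane slope in x
            gy = (win[2, :] - win[0, :]).sum() / 6.0  # plane slope in y
            m = math.hypot(gx, gy)
            if m >= mag_threshold and m > 0:
                cos_a[y, x] = gx / m
                sin_a[y, x] = gy / m
                mag[y, x] = m
    return cos_a, sin_a, mag
```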
  • edge pixels are identified using the gradient information.
  • An edge pixel can be defined as a pixel for which the magnitude of the gradient of the image is a local maximum in the direction of the gradient. These pixels represent the points at which the rate of change in intensity is the greatest.
  • a separate array of pixels is used (of the same dimensions as the original image) to store this information for further processing.
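The local-maximum test for edge pixels can be sketched as below, assuming the cosine, sine and magnitude arrays described earlier. The gradient direction is rounded to the nearest of the eight neighbor offsets; that quantization is an assumption of this sketch:

```python
import numpy as np

def edge_pixels(cos_a, sin_a, mag):
    """Mark pixels whose gradient magnitude is a local maximum along the
    gradient direction, i.e., at least as large as the magnitudes one
    step forward and one step back along that direction."""
    h, w = mag.shape
    edges = np.zeros((h, w), dtype=bool)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if mag[y, x] == 0:
                continue  # gradient info was zeroed by the threshold
            dx = int(round(cos_a[y, x]))
            dy = int(round(sin_a[y, x]))
            if (mag[y, x] >= mag[y + dy, x + dx]
                    and mag[y, x] >= mag[y - dy, x - dx]):
                edges[y, x] = True
    return edges
```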
  • edge pixels are formed into groups based on the direction of their gradient. A threshold on the difference in direction is used to include or exclude pixels from a group. Each pixel in a group should be adjacent to another pixel in the group. The edge pixels are labeled identifying the group to which they belong.
  • the group(s) with crystal characteristics are selected and at step 2045 the selected groups are provided to another analysis process for aid in further analysis of the image.
  • Figure 20B includes the same steps 2005 - 2035 as in Figure 20A, and then uses the crystal characteristic "straightness" to determine whether a group of pixels depict a crystal.
  • edge pixels are formed into a group(s), as described above for Figure 20A.
  • the edge detection process 2000 determines the "straightness" of each labeled group of pixels using linear regression, according to one embodiment. The correlation from the linear regression and the number of pixels in the group are used to determine the "straightness" of the group.
  • the straightness can be defined as the product of the count of pixels in the group and the reciprocal of 1.0 minus the fourth power of the correlation coefficient for the group, according to one embodiment. If the count of pixels is below a given threshold, the count is set to zero.
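The straightness measure just defined can be sketched directly from its description. Note that for a perfectly collinear group the denominator is zero, so this sketch returns infinity there (how the patent handles that case is not stated), and the `min_count` threshold value is illustrative:

```python
def straightness(xs, ys, min_count=5):
    """Straightness = pixel count / (1 - r**4), where r is the linear
    regression correlation coefficient of the group's pixel coordinates.
    Groups smaller than min_count score zero."""
    n = len(xs)
    if n < min_count:
        return 0.0
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    if sxx == 0 or syy == 0:
        r_sq = 1.0  # a purely horizontal or vertical run is straight
    else:
        r_sq = (sxy * sxy) / (sxx * syy)
    denom = 1.0 - r_sq * r_sq  # 1 - r**4
    return float('inf') if denom == 0.0 else n / denom
```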
  • the edge detection process 2000 generates an image, hereinafter referred to as a "lines image,” using the previously calculated straightness information.
  • the lines image is the same shape and size as the subset of pixels selected for edge detection.
  • the intensity value for a pixel in the lines image is set to the straightness value of the group that its corresponding pixel belongs to.
  • the lines image, containing information indicating where "straight" pixels may be found, is provided to an analysis module to aid in crystal identification.
  • the scheduler 1825 controls the imaging of samples by communicating to the imaging system controller 1815 the necessary information for imaging a particular plate and the droplet samples on that plate.
  • the imaging system controller 1815 directs the imaging system 1805 to generate the images of the particular plate and droplet sample at a specified time or in a specified sequence, and the images are stored on the image storage device 1810.
  • the scheduler 1825 sends an analysis request to the image analyzer 1815, and the corresponding image for that sample is provided to the image analyzer 1815.
  • the image analyzer 1815 determines the contents of the image using one or more of the various analysis modules, and provides results to the scheduler 1825 in an analysis response.
  • Figure 21 shows a process 2100 that uses the results of analyzing an image for subsequent imaging of the same sample, according to one embodiment of the invention.
  • a first image of a sample is generated using a first set of imaging parameters, which may include for example, focus, depth of field, aperture, zoom, illumination filtering, image filtering, and/or brightness.
  • An analysis process receives the first image at step 2110 and analyzes the first image in accordance with the analysis request at step 2115.
  • the process 2100 determines whether crystal formation in the first image is suspected, the presence of which can make an additional image of the sample desirable. For example, to determine if an additional image is desired, a score can be computed for the image.
  • the score can be based upon user-adjustable thresholds and weighting factors, allowing the user to tailor preferences with experienced personal judgment. If the overall score exceeds a specific threshold, reimaging is warranted and an appropriate reimaging request is dispatched. Scoring and thresholds may be a function of apparent image content and/or of system bandwidth and scheduling issues. The more available system resources (e.g., the imaging subsystem) are, the more likely zoomed-in reimaging is to occur.
  • the analysis of the first image at step 2120 can be done using a relatively fast running process, e.g., determining the inner/outer non-clear ratio for the droplet sample, and a further, more thorough analysis can be done at step 2140, according to one embodiment.
  • information is provided to the imaging system that allows the same sample to be re-imaged to create a second image of the sample.
  • Subsequent images generated of the same sample can use imaging parameters that are different than those used to generate the first image, that is, at least one value of an imaging parameter used to generate the second image is different than the values of the imaging parameters used to create the first image.
  • the process 2100 receives the second image of the sample and analyzes the second image at step 2140 using, for example, the analysis methods described herein. Analysis results are output for evaluation or display at step 2145.
  • subsequently generated images can more clearly show the presence of crystal formation. For example, if the formation of crystal in the sample droplet is suspected as a result of analyzing the first image, information can be communicated to the imaging system to zoom-in on the area where the crystal formation is suspected and re-image the droplet using a higher magnification.
  • Other imaging parameters e.g., focus, depth of field, aperture, zoom, illumination filtering, image filtering, and brightness, can also be changed to obtain an image that may better depict the contents of the sample.
  • Timely analysis of the first image can result in a relatively large time savings if a subsequent image of a particular sample is desired.
  • the process for handling a sample plate containing the sample e.g., fetching the correct plate from a storage location, placing the plate in the imaging device, and returning the plate to its storage location, is very time consuming.
  • minimizing the amount of plate handling during image generation increases image generation and analysis throughput.
  • the images generated from the samples on a sample plate are completely analyzed before the plate is removed from the imaging device. If desired, additional subsequent images of a sample contained on that plate can then be generated without incurring the time required to re-fetch the plate.
  • a certain percentage of the images are analyzed before the plate is removed. While this may not allow every sample to be re-imaged without re-fetching the plate, e.g., the analysis of the last sample imaged may not be completed before the plate is removed, it may still result in an overall time savings as it may allow quick reimaging of most of the samples, if desired, while not unduly delaying the removal of the plate from the imaging device.
  • Figure 22 illustrates a process 2200 that includes generating two images of a sample, where each image is generated using a set of imaging parameters that has at least one different imaging parameter than those used for the other image, according to one embodiment of the invention.
  • a first image is generated using a first set of imaging parameters.
  • the first image is received by an analysis process which determines one or more regions of interest in the first image at step 2215.
  • the analysis process may be, for example, an edge detection process or a process implemented in one of the analysis modules, both of which are described hereinabove.
  • a second image is generated using a second set of imaging parameters where the second set of imaging parameters includes at least one imaging parameter that is different than the first set of imaging parameters.
  • One or more imaging parameters may be changed to generate the second image.
  • the focal plane may be set to a different height relative to the droplet sample
  • the illumination of the sample may be changed, including using a different direction of illumination (e.g., lighting the sample from alternate sides and off- axis lighting) or a different illumination brightness level
  • the magnification or zoom level used may be changed
  • different filtering may be used for each image (e.g., polarizing filters).
  • the second image is received by an analysis process, and analyzed to determine a region or regions of interest at step 2230.
  • the regions of interest from the first and second images are combined to form a composite image.
  • the composite image is the same size as the first and second images.
  • the first and second images are analyzed to determine the portion or portions of each image that will be used to form the composite image.
  • the composite image is generated by copying the values of the pixels from each region of interest in the first and second images into one composite image.
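Copying region-of-interest pixels into a single composite can be sketched as follows. Here the first image serves as the background and later images' regions overwrite it; that precedence where regions overlap is an assumption, since the text does not specify one:

```python
import numpy as np

def merge_regions(images, masks):
    """Build a composite the same size as the inputs by copying each
    image's region-of-interest pixels (True in its mask) into place."""
    composite = images[0].copy()
    for img, mask in zip(images[1:], masks[1:]):
        composite[mask] = img[mask]
    return composite
```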
  • the composite image is analyzed for the presence of crystal formation by a user, or automatically by an automatic or interactive analysis method, e.g., using the content analysis module, the notable regions analysis module, the crystal object analysis module, or a report inner/outer non-clear ratio module, as previously described, and the results are output at step 2245.
  • process 2200 shows a process to form a composite image using two images generated with different imaging parameters
  • more than two images may also be generated and used to form composite images, where each image is generated using at least one different imaging parameter, according to another embodiment.
  • a plurality of images are generated for a sample where the focal plane for each image is set at a different "height" relative to the sample.
  • the resulting images may show varying sharpness in corresponding locations. The sharpness of the corresponding portions of the images are compared to determine which portion of each image should form the composite image.
  • the portion of each image that best satisfies specified sharpness criteria may be selected from the plurality of images to form the composite image.
  • the size of the image portions compared across images may be as small as a single pixel or several pixels, and may be as large as tens or hundreds of pixels, or even larger.
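The focal-plane stacking variant described above can be sketched by comparing per-tile sharpness, here approximated by local intensity variance. The variance criterion and tile size are assumptions of this sketch; any sharpness metric could be substituted:

```python
import numpy as np

def focus_stack(images, tile=8):
    """For each tile, copy the tile from whichever image has the highest
    local variance (a simple sharpness proxy) into the composite."""
    h, w = images[0].shape
    out = np.zeros((h, w), dtype=images[0].dtype)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            tiles = [img[y:y + tile, x:x + tile] for img in images]
            best = max(range(len(images)),
                       key=lambda i: tiles[i].astype(float).var())
            out[y:y + tile, x:x + tile] = tiles[best]
    return out
```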
  • FIG. 23 illustrates a process 2300 for visual evaluation of crystal growth by a user, according to another embodiment of the invention.
  • process 2300 receives an image of a sample.
  • the process 2300 classifies the pixels of the image according to their depiction of the contents of the sample, e.g., the pixels are classified as depicting crystal, precipitate, clear or an edge.
  • the pixels of the image may be classified by processes incorporated into the content analysis module 1930, the notable regions analysis module 1935, the crystal object analysis module 1940, as described above, or another suitable analysis process.
  • process 2300 generates a second image that is color-coded using the pixel classification information from step 2310.
  • Step 2315 may be performed by the above-described graphical output analysis module 1950.
  • Pixels that were classified as edge, precipitate or crystal pixels are depicted as a particular color, e.g., red for crystal pixels, green for precipitate pixels, and blue for edge pixels.
  • One or all of the classified pixel types may be depicted according to a color-code scheme.
  • the second image can have opaque color-coded information, or translucent color-coded information that also shows the original image through the color.
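The opaque versus translucent color coding can be sketched with simple alpha blending: with alpha = 1.0 the coding is fully opaque, while smaller values let the original grayscale image show through the color. Names and the RGB color choices here are illustrative:

```python
import numpy as np

def color_code(gray, classes, colors, alpha=1.0):
    """Render a color-coded second image: pixels of each named class are
    blended with that class's RGB color over the grayscale original."""
    out = np.stack([gray] * 3, axis=-1).astype(float)
    for name, rgb in colors.items():
        mask = classes == name
        out[mask] = (1.0 - alpha) * out[mask] + alpha * np.array(rgb, float)
    return out.astype(np.uint8)
```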
  • the second image is typically the same size and shape as the image received at step 2305.
  • the color-coded second image is visually displayed, for example, on a computer monitor or on a printout.
  • the second image is visually analyzed to determine crystal growth information of the droplet sample. Displaying the color-coded image to a user facilitates efficient interpretation of the contents of the image and allows the presence of crystals in the image to be easily visualized.

Abstract

An image analysis system (200) and related methods for automation of the monitoring of samples to determine crystal growth. Samples are imaged from time to time using a set of imaging parameters. The resulting images are evaluated to determine the contents of the samples. If the evaluation of the image indicates the presence of crystals, the sample may be re-imaged using a different set of imaging parameters, and the resulting image analyzed to determine its contents. The sample may also be evaluated by generating multiple images of a sample using various sets of imaging parameters, identifying pixels that depict regions of interest in a plurality of images, merging the pixels from each region of interest into a composite image and analyzing the composite image. An image of a sample can also be evaluated by classifying the pixels of the image based on the contents of the sample that they depict, color-coding the pixels using the classifications, and displaying the colored image for analysis.

Description

IMAGE ANALYSIS SYSTEM AND METHOD
Background of the Invention Field of the Invention
[0001] This invention generally relates to systems and methods for analyzing and exploiting images. More particularly, the invention relates to systems and methods for identifying and analyzing images of substances in samples. Description of the Related Technology
[0002] X-ray crystallography is used to determine the three-dimensional structure of macromolecules, e.g., proteins, nucleic acids, etc. This technique requires the growth of crystals of the target macromolecule. Typically, crystal growth of macromolecules is dependent on several environmental conditions, e.g., temperature, pH, salt, and ionic strength. Hence, growing crystals of macromolecules requires identifying the specific environmental conditions that will promote crystallization for any given macromolecule. Moreover, it is insufficient to find conditions that result in any type of crystal growth; rather, the objective is to determine those conditions that yield well-diffracting crystals, i.e., crystal configurations that provide the resolution desired to make the data useful.
[0003] Modern chemistry and biology laboratories produce and analyze multiple samples concurrently in order to accelerate the crystal growth development cycle. The samples are often produced and stored in a sample storage container, such as the individual wells in a well plate. Alternatively, drops of multiple samples are placed at discrete locations on a plate, without the need for wells to contain the sample. In either case, hundreds, thousands, or more, different sample drops may be placed on a single analysis plate. Similarly, a single laboratory may house thousands, millions, or more, samples on plates for analysis. Thus, the number of drops to monitor and analyze may be extremely large.
[0004] In the screening experiments, samples under investigation are periodically evaluated to determine if suitable crystallization of the sample has taken place. In a conventional laboratory, a technician manually locates and removes each plate or sample storage receptacle from a storage location and views each sample well under a microscope to determine if the desired biological changes have occurred. In most cases, the plates are stored in laboratories within a controlled environment. For example, in protein crystallization analysis, samples are often incubated for long periods of time at controlled temperatures to induce production of crystals. Thus, the technician must locate, remove, and view the samples under a microscope in a refrigerated room. Further increasing the demand for technician labor, hundreds or thousands of samples in sample wells may need to be periodically viewed or otherwise analyzed to determine the existence of crystals in a sample well.
[0005] As an alternative, an image may be periodically generated for each sample and provided to a technician, who need not be geographically co-located with the sample, to analyze the image to evaluate crystal growth. Automated image evaluation techniques can also be used to analyze the image and evaluate the presence of crystal growth and increase system throughput. However, current image analysis techniques do not always receive sufficient information from the sample image to accurately evaluate crystal growth. Important information learned as a result of analyzing the image is not automatically exploited, or used for further analysis to facilitate a user's evaluation of the image. Additionally, in current systems, the results of analyzing the image are not adequately provided to facilitate easy interpretation and efficient decision making.
[0006] Accordingly, there is a need in the industry for systems and methods that overcome the aforementioned problems in the current art.
Summary of Certain Inventive Aspects
[0007] This invention relates to systems and methods for automation of the monitoring of samples to determine crystal growth. According to one embodiment, the invention comprises a method of evaluating crystal growth in a crystal growth system, comprising receiving a first image of a sample, said first image generated by an imaging system using a first set of imaging parameters, analyzing information depicted in said first image to determine the contents of said sample, determining whether to generate another image of said sample based on the contents of said sample, providing information to said imaging system to generate a second image of the sample using a second set of imaging parameters, wherein said second set of imaging parameters comprises at least one imaging parameter that is different from an imaging parameter in said first set of imaging parameters, receiving said second image of said sample, and analyzing information depicted in said second image to determine the contents of said sample. According to other embodiments, the different imaging parameter included in the method can be depth-of-field, illumination brightness level, focus, the area imaged, the center location of the area imaged, illumination source type, magnification, polarization, and/or illumination source position. According to other embodiments of the method of evaluating crystal growth, analyzing said first image comprises determining a region of interest in said first image, and wherein said information is used to adjust said second set of imaging parameters so that the imaging system generates a zoomed-in second image of said region of interest.
[0008] According to another embodiment, analyzing information in the method of evaluating crystal growth in a crystal growth system comprises determining whether said first image depicts the presence of crystals, and can further comprise, wherein said first image comprises pixels, and said determining comprises classifying said pixels and comparing the number of pixels classified as crystals to a threshold value.
[0009] According to another embodiment, a method of evaluating crystal growth in a crystal growth system comprises counting the number of said pixels depicting objects in the sample and evaluating said number using a threshold value.
[0010] According to another embodiment of the invention, the method of analyzing crystal growth comprises receiving a first image having pixels depicting crystal growth information of a sample, identifying a first set of pixels in said first image comprising a first region of interest, receiving a second image having pixels depicting crystal growth information of said sample, identifying a second set of pixels in said second image comprising a second region of interest, merging said first set of pixels and said second set of pixels to form a composite image, and analyzing said composite image to identify crystal growth information of said sample. According to another embodiment said first image is generated by an imaging system using a first set of imaging parameters, said second image is generated by said imaging system using a second set of imaging parameters, and wherein said second set of imaging parameters comprises at least one imaging parameter that is different from the imaging parameters in said first set of imaging parameters.
[0011] According to another embodiment of the invention, a method of analyzing crystal growth information comprises receiving a first image comprising a set of pixels that depict the contents of a sample, determining information for each pixel in said set of pixels, wherein said information comprises a classification describing the type of sample content depicted by said each pixel, and a color code associated with each classification, generating a second image based on said information and said set of pixels, displaying said second image, and visually analyzing said second image to determine crystal growth information of the sample.
[0012] According to another embodiment, the invention comprises a system for detecting crystal growth information comprising an imaging subsystem with means for generating an image of a sample, wherein said image comprises pixels that depict the content of said sample, an image analyzer subsystem coupled to said imaging system with means for receiving said image, means for classifying the content of said sample using said pixels and means for determining whether said sample should be re-imaged based on said classifying; and a scheduler subsystem coupled to said imaging analyzer system with means for causing said imaging subsystem to re- image said sample.
[0013] According to another embodiment, the invention comprises a computer- readable medium containing instructions for analyzing samples in a crystal growth system, by receiving a first image of a sample, said first image generated by an imaging system using a first set of imaging parameters, analyzing information depicted in said first image to determine the contents of said sample, determining whether to generate another image of said sample based on the contents of said sample, providing information to said imaging system to generate a second image of the sample using a second set of imaging parameters, wherein said second set of imaging parameters comprises at least one imaging parameter that is different from an imaging parameter in said first set of imaging parameters, receiving said second image of said sample, analyzing information depicted in said second image to determine the contents of said sample.
Brief Description of the Drawings
[0014] The above and other aspects, features, and advantages of the invention will be better understood by referring to the following detailed description, which should be read in conjunction with the accompanying drawings, in which:
[0015] Figure 1A is a high-level block diagram of an imaging system according to the invention.
[0016] Figure 1B is a high-level block diagram of another imaging system according to the invention.
[0017] Figure 2 is a perspective view of an imaging system according to the invention.
[0018] Figure 3 is a perspective view of the imaging system shown in Figure 2, viewed from a different angle.
[0019] Figure 4 is a perspective view of the imaging system shown in Figure 2, viewed from yet a different angle.
[0020] Figure 5 is a plan front view of the imaging system shown in Figure 2.
[0021] Figure 6 is a plan, right side view of the imaging system shown in Figure 2.
[0022] Figures 7A and 7B are perspective views from different angles of a lens system as can be used with the imaging system shown in Figure 2.
[0023] Figure 8 is a perspective view from below of a photo-filter carriage that can be used with the imaging system shown in Figure 2.
[0024] Figure 9 is a perspective view of certain components as assembled in the imaging system shown in Figure 2.
[0025] Figure 10 is a plan front view of certain components as assembled in the imaging system shown in Figure 2.
[0026] Figure 11 is a plan, right side view of the components shown in Figure 10.
[0027] Figure 12 is a perspective view of a light source as can be used with the imaging system shown in Figure 2.
[0028] Figure 13 is a perspective view of a sample mount with the light source shown in Figure 12, viewed from a different angle.
[0029] Figure 14A is a plan top view of the light source shown in Figure 12.
[0030] Figure 14B is a cross-sectional view along the plane A-A of the light source shown in Figure 14A.
[0031] Figure 15 is an exploded, perspective view of certain components of the sample mount and the light source shown in Figure 13.
[0032] Figure 16 is a functional block diagram of an illumination duration control circuit as can be used with the light source shown in Figure 12.
[0033] Figure 17 is a functional block diagram of an automated sample analysis system in which the imaging system according to the invention can be used.
[0034] Figure 18 is a block diagram of an imaging and analysis system.
[0035] Figure 19 is a block diagram of a computer that includes a Crystal Resolve analysis module, according to one aspect of the invention.
[0036] Figure 20A is a block diagram of an analysis system process, according to one embodiment of the invention.
[0037] Figure 20B is a block diagram of an analysis system process, according to one embodiment of the invention.
[0038] Figure 21 is a flow diagram of an imaging analysis process, according to one embodiment of the invention.
[0039] Figure 22 is a flow diagram of an imaging analysis and control process, according to one embodiment of the invention.
[0040] Figure 23 is a flow diagram of an analysis process, according to one embodiment of the invention.
Detailed Description of Certain Inventive Embodiments
[0041] Embodiments of the invention will now be described with reference to the accompanying figures, wherein like numerals refer to like elements throughout. The terminology used in the description presented herein is not intended to be interpreted in any limited or restrictive manner, simply because it is being utilized in conjunction with a detailed description of certain specific embodiments of the invention. Furthermore, embodiments of the invention may include several novel features, no single one of which is solely responsible for its desirable attributes or which is essential to practicing the inventions herein described.
[0042] The imaging and analysis system and methods disclosed here are related to embodiments of an automated sample analysis system having an imaging system that is described in the related U.S. provisional patent application No. 60/444,519, entitled "AUTOMATED SAMPLE ANALYSIS SYSTEM AND METHOD." An imaging system that can provide images of samples for analysis, in response to control information, is described hereinbelow, followed by a description of a system and processes for analyzing the images. It should be noted here that the terms "image", "subimage" or "pixels" as used herein at various locations do not necessarily mean an optical image, subimage or pixels which are usually displayed or printed, but rather include digital representations or other representations of such image, subimage or pixels. It should also be noted that the term "sample" as used herein refers to any type of suitable sample, for example, drops, droplets, the contents of a well, the contents of a capillary, a sample in a gel, or any other form of containing a sample or material.
[0043] Figure 1A is a high-level block diagram of an imaging system 100. In this embodiment, the imaging system 100 has an assembly 105 that is controlled by controllers and logic 110. The assembly 105 includes a stage 115 that holds and transports target samples to be imaged by an image capture device 120. The imaging system 100 employs an optics assembly 125 to enhance the view of the target samples before the image capture device 120 obtains the images of the samples. An illuminator 130 is configured as part of the assembly 105 to direct light at the samples held in the stage 115.
[0044] The assembly 105 also includes a translator 135 that provides the structural support members and actuators to move any one combination of the stage 115, image capture device 120, optics 125, or illuminator 130. The translator 135 may be configured to move the combination of components in one, two, or three dimensions. As will be discussed in detail below, in some embodiments the stage 115 remains stationary while the translator 135 moves the image capture device 120 and optics 125 to a desired well position in a sample plate held by the stage 115. In other embodiments of the imaging system 100, the translator 135 moves the stage 115 in a first axis and the image capture device 120 and optics 125 in a second axis which is substantially perpendicular to the first axis.
[0045] The controllers and logic 110 of the imaging system 100 provide instructions to and coordinate the activities of the components of the assembly 105. The controllers may include a microprocessor, controller, microcontroller, or any other computing device. The logic includes the instructions to cause the controller to perform the tasks or processing described here.
[0046] Figure 1B is a high-level block diagram of an imaging system 150. The imaging system 150 includes an assembly 155 in communication with controllers and logic 160. The assembly 155 may also be in communication with a data storage device 190, which itself may be configured for communication with the controllers and logic 160. The controllers and logic 160 control and coordinate the activities of the components of the assembly 155.
[0047] In this embodiment, the assembly 155 includes a sample plate mount 165 suitably configured to receive micro-titer plates of various configurations and sizes. Alternatively, the sample plate mount 165 can be configured to receive any sample matrix that carries samples, regardless of whether the samples are stored in individual sample wells, rest on the surface of the sample matrix (e.g., as droplets), or are embedded in the sample matrix. A source of flash lighting 180 is arranged to direct light bursts to the samples stored in the micro-titer plate carried by the sample plate mount 165. An inventive system and method of providing the flash lighting 180 will be discussed with reference to Figure 16.
[0048] The assembly 155 includes a compound lens 175 that cooperates with a digital camera 170 to acquire images of the samples in the sample plate. The compound lens 175 may consist, for example, of an objective lens, a zoom lens, and additional optics chosen to provide the digital camera 170 with the desired image from the light from the samples. In one embodiment, as will be discussed further below, the compound lens 175 may be motorized (i.e., provided with one or more actuators) so that the controllers and logic 160 can automatically focus the scene, zoom on the scene, and set the aperture.
[0049] In this embodiment, the assembly 155 includes an x-y translator that moves either the sample plate mount 165 or the compound lens 175, or both. Of course, if the digital camera 170 is coupled to the compound lens 175, the x-y translator moves both the digital camera 170 and the compound lens 175. In some embodiments, the x-y translator 185 is configured to move the sample plate mount 165 in two axes, e.g., x and y coordinates. Alternatively, the x-y translator 185 moves the compound lens 175 in two axes, while the sample plate mount 165 remains stationary. In yet another embodiment, the x-y translator consists of multiple and separate actuators that independently move the sample plate mount 165 or the compound lens 175.
[0050] It should be noted that the assembly 155, the controllers and logic 160, and the data storage 190 are depicted as separate components for schematic purposes only. That is, in some embodiments of the imaging system 150 it is advantageous to, for example, integrate the data storage device 190 into the assembly 155 and to include the controllers and logic 160 as part of one or more of the components shown as being part of the assembly 155. Similarly, the sample mount 165, digital camera 170, compound lens 175, flash lighting 180, and x-y translator 185 need not all be configured as part of a single assembly 155 as shown.
[0051] Exemplary ways of using and constructing embodiments of the imaging system
100 or 150 will be described in detail below with reference to Figures 2-16, which depict a specific embodiment of the imaging system. Of course, since there are multiple ways to implement the imaging system, the following description of the specific embodiment should not be taken to limit the full scope of the inventive imaging system.
Illustrative Embodiment
[0052] With reference to Figures 2-6 and 9-11, perspective and plan views of an imaging system 200 according to the invention are illustrated. The imaging system 200 includes a sample plate mount 210 that receives a sample plate 212. An x-translator having an actuator 218 (see Fig. 4) is coupled to the sample plate mount 210 to move the sample plate mount 210 into position above a light source 216 and below a lens assembly 230. A digital camera 214 is coupled to the lens assembly 230 to capture images of the wells in the sample plate 212. A y-translator having an actuator 220 (see Fig. 3) is coupled to the lens assembly 230 to move the lens assembly 230 into position over a desired well of the sample plate 212.
Support Platform
[0053] The digital camera 214, lens assembly 230, sample plate mount 210, light source 216, x-translator 218, and y-translator 220 are mounted on a platform 240 (see Fig. 2). The platform 240 generally consists of several structural members, brackets, or walls, e.g., base 242, side wall 244, front wall 250, bracket 252, bracket 246, post 248, and support member 254. The light source 216 can be fastened to the base 242. Rails 256 and 258, which support the lens assembly 230, are fastened to the wall 250 of the platform 240 and to the support member 254. The sample plate mount 210 is supported by a rail 262 and an outport guide 253 of the support member 254. The rail 262 is supported through attachment to the side wall 244 and the post 248. Of course, there are multiple, equivalent alternatives to providing support for and configuring the lens assembly 230, sample plate mount 210, light source 216, and x-, y-translators 218 and 220 on the platform 240.
[0054] The platform 240 may be constructed of any of several suitable materials, including but not limited to, aluminum, steel, or plastics. Because in some applications it is critical to keep vibration of the platform 240 to a minimum, materials that provide rigidity to the platform 240 are preferred in such applications. With regard to the rails 256, 258, and 262, these are preferably manufactured with very smooth surfaces to carry the lens assembly 230 or the sample plate mount 210 in a smooth fashion, thereby avoiding vibrations. As illustrated in Figures 3 and 9, supporting the lens assembly 230 may be done by coupling the linear plain bearings 264 and 266 to the rails 256 and 258. A similar coupling using a "bushing" 267 (see Fig. 10) may be employed to fasten the sample plate mount 210 to the rail 262. Bearings 264, 266, and 267 are chosen to provide smooth bearing surfaces for smooth translation of the load, e.g., the lens assembly 230 or the sample plate mount 210.
Sample Plate Mount
[0055] The sample plate mount 210 may be constructed from any rigid material, e.g., steel, aluminum, or plastics. Preferably the sample plate mount 210 is configured to accommodate, either directly or through the use of adapters, various standard sizes of micro-titer plates. Micro-titer plates that may be used with the sample plate mount 210 include, but are not limited to, crystallography plates manufactured by Linbro, Douglas, Greiner, and Corning. As will be described further below, the sample plate mount 210 is coupled to an actuator 218 for moving the sample plate mount 210 in one axis.
Translators
[0056] The imaging system 200 includes two independent translators. Typically, the sample plate mount 210 and the lens assembly 230 move on a plane that is substantially parallel to a plane defined by the sample plate 212 carried by the sample plate mount 210. In one embodiment, the controllers and logic 110 or 160 can control x-, y-translators to position the sample plate mount 210 and the lens assembly 230 at the coordinates of a specific well of the sample plate 212.
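The positioning of the mount and lens assembly at the coordinates of a specific well, as described above, amounts to converting a well address into stage x-y coordinates. The following sketch assumes a standard 96-well plate with a 9 mm well pitch; the origin offset and function name are hypothetical, not values taken from the disclosure.

```python
# Assumed well pitch of a standard 96-well micro-titer plate (mm).
WELL_PITCH_MM = 9.0

def well_to_xy(well, origin=(14.38, 11.24)):
    """Convert a well label like 'B3' to (x, y) stage coordinates in mm.

    `origin` is the assumed center of well A1 relative to the stage
    home position; the real value would come from system calibration.
    """
    row = ord(well[0].upper()) - ord("A")   # 'A' -> 0, 'B' -> 1, ...
    col = int(well[1:]) - 1                 # '1' -> 0, '3' -> 2, ...
    x = origin[0] + col * WELL_PITCH_MM
    y = origin[1] + row * WELL_PITCH_MM
    return x, y
```

The controllers and logic would then command the x- and y-translators to these coordinates before triggering an image capture.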
[0057] An x-axis translator for moving the sample plate mount 210 consists of an actuator 218 (see Fig. 4) that rotates a threaded rod 219 (or "lead screw") about its axis in clockwise or counter-clockwise directions. In the embodiment shown in Figures 3, 4, and 10, the actuator 218 is coupled to the rod 219 via a belt (not shown) and pulleys 221 and 221'. The sample plate mount 210 is fastened to a "bushing" 267 (see Fig. 10) that rides on the rail 262. The sample plate mount 210 is also supported by the outport guide 253 (see Figs. 6 and 11) of the support member 254. The "bushing" 267 is additionally coupled in a known manner to the rod 219. When the actuator 218 turns in one direction, its power is transmitted via the belt and pulleys 221 and 221' to the rod 219, which then moves the "bushing" 267 and, thereby, moves the sample plate mount 210 in a linear direction.
[0058] A y-axis translator for moving the lens assembly 230 consists of an actuator
220 (see Fig. 3) that rotates a threaded rod 260 about its axis in clockwise or counter-clockwise directions. In the embodiment shown in Figures 3, 6, and 9, for example, the actuator 220 is coupled to the rod 260 through a slotted disc coupling (not shown). The lens assembly 230 is coupled to bearings 264 and 266 that respectively ride on rails 256 and 258. The bearings 264 and 266 are coupled to the rod 260 through plate 255 and the bracket 257 (see Fig. 6) in a known manner. When the actuator 220 turns in one direction, its power is transmitted via the slotted disc coupling to the rod 260, which then moves the bearings 264 and 266 and, thereby, moves the lens assembly in a linear direction.
[0059] The actuators 218 and 220 may be direct current gear motors or 3-phase servo motors, for example. Of course, the type of motors employed as the actuators 218 or 220 will depend on, among other things, the weight of the sample plate mount 210 plus sample plate 212 or the lens assembly 230 and the digital camera 214. Another factor in determining the type of motor is the desired speed. In one embodiment, actuators 218 and 220 having a positioning precision of 10-microns are used. Suitable motors may be obtained from PITMANN® of Harleysville, Pennsylvania.
[0060] In the embodiment of the x-, y-translators described above, each translator mechanism independently translates each of the sample plate mount 210 and the lens assembly 230 along its axis of motion. However, it should be noted that in other embodiments of the imaging system 200, it may be desirable to maintain the lens assembly stationary and only move the sample plate mount 210, which would then have one or more translators to position the sample plate mount 210 anywhere in an x-y coordinate area. Similarly, the imaging system 200 may be configured so that an x-y translator (or set of x-, y-translators) moves the lens assembly in the x-y coordinate area, while the sample plate mount 210 remains stationary over the light source 216. In one embodiment, the x-, y-translators employ optical sensors 285 and 287 (see Fig. 5) to sense the start or end positions ("home positions") of the lens assembly 230 or the sample plate mount 210.
[0061] In yet another embodiment, the imaging system 200 may also include a z-axis translator (not shown) to lift or lower the sample plate mount 210, lens assembly 230, or light source 216. The z-axis translator may consist of, for example, an actuator, a lead screw, one or more rails, and appropriate bearings and fasteners.
[0062] The actuators 218 and 220 may be governed by a controller (not shown).
Suitable controllers may be obtained from J R Kerr Automation Engineering of Flagstaff, Arizona. The controller may be configured to interpret high level commands from a computing device. In one embodiment, when a specific axis is addressed, the controller causes the actuator 220, for example, to move and keeps count of the travel distance and final location. The controller can be programmed to move the actuator 220 at varying speed, torque, and acceleration.
Image Capture Device
[0063] In some embodiments of the imaging system 200, the image capture device can be a film camera, a digital camera, a CMOS camera, a charge coupled device (CCD), and the like, or some other apparatus for capturing an image of an object. The embodiments of the imaging system 200 described here employ a digital camera 214. A suitable digital camera 214 is, for example, a CMOS digital camera. However, it should be apparent that several digital photography devices could also be employed. The CMOS camera 214 is preferred because it provides random access to the image data and is relatively low cost. In conventional imaging systems for crystallography, a CMOS camera is typically not used because in those systems the level of light is insufficient for this type of camera. In contrast, the imaging system 200 is configured to provide the level of light necessary to allow use of a CMOS camera.
[0064] The digital camera 214 can be a CMOS camera having a pixel resolution of
1280 x 1024 pixels, a Bayer color filter, a pixel size of 7.5 x 7.5 microns, and a data interface governed by the IEEE 1394 standard (commonly known as "Firewire"). The digital camera 214 may be fully digital and not require a frame grabber. The digital camera 214 may also have a centered pixel area, e.g., a 1024 x 1024 or 800 x 600 pixel subset of the array, which enhances the image quality since the edges of the array, where optical distortions increase, are avoided. In one embodiment, the digital camera 214 is connected separately to a host computer (not shown) via a Firewire data interface. This allows for rapid transfer of large amounts of image data, e.g., five images per second.
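Reading out a centered pixel area of the full sensor array, as described above, is a simple crop that discards the distortion-prone edges. The function below is an illustrative sketch, not the camera's actual readout interface.

```python
import numpy as np

def centered_subset(frame, out_h, out_w):
    """Return a centered out_h x out_w window of a full sensor frame,
    discarding the edge pixels where optical distortions increase."""
    h, w = frame.shape[:2]
    top = (h - out_h) // 2
    left = (w - out_w) // 2
    return frame[top:top + out_h, left:left + out_w]
```

For the 1280 x 1024 sensor described above, a 1024 x 1024 subset skips 128 columns on each side while keeping every row.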
Lens Assembly
[0065] One embodiment of the lens assembly 230 includes an objective lens 231, a zoom lens 233, and an adapter 235. These optical components are chosen to provide suitable field of view, magnification, and image quality. The objective lens 231, zoom lens 233, and adapter 235 may be purchased from, for example, Navitar Inc. of Rochester, New York.
[0066] In one embodiment, the zoom lens 233 may be the "12X UltraZoom" zoom lens manufactured by Navitar. The zoom lens 233 may provide a 12:1 zoom factor, a focus range of about 12-mm, and an aperture of about 0.14. The zoom lens 233 preferably includes adapters for mounting the objective lens 231. The zoom lens 233 may have actuators 233A, 233B, and 233C for providing, respectively, automatic aperture adjustment, autozoom, and autofocus functionality. In one embodiment, actuators 233B and 233C have gear reductions of 262:1. Of course, the gear reduction ratio is chosen to suit the particular application. For example, a 5752:1 gear ratio for the focus actuator 233C may be too slow for some applications of the imaging system 200. The actuators 233A, 233B, and 233C may be obtained from Navitar or from MicroMo Electronics, Inc. of Clearwater, Florida.
[0067] The objective lens 231 may be, for example, a 5X Mitutoyo Infinity Corrected Long Working Distance Microscope Objective (model M Plan Apo 5). The objective lens 231 is coupled to the zoom lens 233. Since the light source 216 delivers sufficient light to the sample plate 212, the lens assembly 230 is configured to allow for setting a small aperture in order to increase the depth of field. The objective lens 231 preferably provides a working distance that allows adequate room beneath the lens assembly 230 to manipulate a sample plate 212 and provide a photo-filter carriage 237 in the image path. In one embodiment, the working distance of the objective lens 231 is about 34-mm.
[0068] The adapter 235 serves to allow use of the digital camera 214. The adapter 235 may be, for example, a 1X Adapter, model number 1-6015, sold by Navitar. Of course, different combinations of objective lenses 231 and adapters 235 may be used, e.g., a 2X Adapter and 2X Objective combination. The combination of 1X Adapter and 5X Objective provides a suitable image for most applications of the imaging system 200. In some embodiments, it is desirable to use a 0.67X Adapter 235 with a 10X Objective 231, for example, to provide a higher image resolution.
[0069] The optical components of the lens assembly 230 can be provided with actuators for remote and automatic control. To allow software control of the optical components, controllers and control logic (not shown) can control the actuators 233A, 233B, 233C, and 233D. The actuators (e.g., dc motors) may be coupled to the aperture of the magnification and focus of the zoom lens 233, as well as the photo-filter carriage 237. In some embodiments, the actuators 233A, 233B, 233C, and 233D are preferably provided with encoders to provide position information to the controllers. In one embodiment, the actuators on the lens assembly 230 are 17-mm direct current motors with 100:1 gear reducers. These motors may be obtained from PITMANN® of Harleysville, Pennsylvania.
[0070] The lens assembly 230 may also include a photo-filter carriage 237 that is configured to hold optical filters (not shown). For example, the photo-filter carriage 237 can hold polarization plates or color light filtering plates. Figure 8 illustrates one embodiment of a photo- filter carriage 237 that may be used with the imaging system 200. The photo-filter carriage 237 includes a filter wheel 237 A for receiving one or more photo-filters (not shown) in openings 237B. The photo-filters may be held in place in the filter wheel 237A in a variety of ways. For example, in the embodiment illustrated in Figure 8, caps 237C in cooperation with suitable fasteners hold the photo-filters in place. The filter wheel 237A may be coupled to an actuator 233D for remote and automatic control of the filter wheel 237A. The actuator 233D and the filter wheel 237A may be fastened, in a conventional manner, to a clamp 237D that is coupled to, for example, the objective lens 231 or the zoom lens 233 (see Figs. 1 and 9). In one embodiment, a polarization filter is coupled to a filter wheel so that the polarization filter covers about 90 degrees of the wheel. In this embodiment, the polarization filter can be rotated so that the applied polarization varies between zero and ninety degrees. Thus, the use of the polarization filter with a polarized light source can provide analysis of the effect of samples on polarized light. For example, when a polarized light source and the polarization filter are cross-polarized then minimal light should get to the objective lens 231, unless the sample re-orients the polarized light, such as can happen when the light passes through crystals.
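The cross-polarization behavior described above follows Malus's law: transmitted intensity falls as the squared cosine of the angle between the source polarization and the filter. The small sketch below illustrates why, with the filters crossed at ninety degrees, essentially no light reaches the objective unless a birefringent crystal in the sample re-orients the polarized light.

```python
import math

def transmitted_fraction(angle_deg):
    """Fraction of polarized light passed by an analyzer at the given
    angle (degrees) to the source polarization, per Malus's law."""
    return math.cos(math.radians(angle_deg)) ** 2
```

At 0 degrees the filter passes essentially all of the polarized light, at 90 degrees essentially none, so any bright features seen through crossed polarizers indicate material (such as a crystal) that rotated the polarization.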
[0071] The digital camera 214 in combination with the lens assembly 230 provides a broad depth of field to allow imaging of objects such as protein crystals at varying depths within a sample droplet stored in a sample well of a sample plate 212. In one embodiment, the lens assembly 230 has a 12:1 zoom lens and, in cooperation with the digital camera 214, can provide a 1 micron optical resolution. In some embodiments, the lens assembly 230 and the digital camera 214 may be integrated as a single assembly.
Light Source
[0072] The light source 216 will now be described with reference to Figures 12-15.
Figure 12 shows a perspective view of the light source 216. Since the crystallization of substances is often highly sensitive to temperature changes, the light source 216 is preferably configured to minimize the amount of heat transferred to the sample plate 212, e.g., by isolating and removing heat generated by the electronics 1408 and illuminators 1402 (see Fig. 14B).
Housing
[0073] With reference to Figures 12, 14B and 15, the light source 216 includes a housing 1202 adapted to store one or more illuminators 1402 (see Figs. 14B and 15), cooling elements 1404, heat reflecting glass 1406, light diffuser plate 1206, and corresponding electronics 1405 and 1408. In one embodiment, the housing 1202 consists of a plurality of walls that serve as structural support for the internal components and that substantially isolate the internal components from the external environment. The housing 1202 can be constructed of a variety of materials including, but not limited to, stainless steel, aluminum, and hard plastics. A material with a low coefficient of heat transfer is preferred so as to substantially keep heat generated within the housing 1202 from reaching the outside through the walls of the housing 1202. However, depending on the application, use of metals is appropriate when cooling elements 1404 are provided. In some embodiments, one or more of the internal surfaces of the walls of the housing 1202 may be coated with a suitable material that absorbs or reflects various types of radiation and prevents them from reaching the outside of the housing 1202.
[0074] In the embodiment of the light source 216 shown in Figures 12-14B, the top wall 1204A of the housing 1202 has an opening to receive and support a light diffuser plate 1206. The plate 1206 serves to diffuse light from the illuminators 1402 onto the sample plate 212. The plate 1206 may be, for example, a sheet of translucent plastic. In one embodiment, inside the housing 1202 and adjacent and below the plate 1206, a heat reflecting glass ("hot mirror") 1406 (see Fig. 14B) is provided. The heat reflecting glass 1406 prevents most infra-red energy from exiting the housing 1202.
[0075] The wall 1204B of the housing 1202 may be provided with a plurality of orifices 1208 that allow a cooling element 1404, such as a fan, to draw air into the housing 1202 for cooling the internal components. A wall 1204C (see Fig. 14B) of the housing 1202 can be fitted with an opening 1410 for receiving a duct that guides forced air out of the housing 1202. A wall 1204D (see Fig. 13) of the housing 1202 can be fitted with a power plug 1208 and a communications port 1302. The housing 1202 is preferably adapted to isolate an operator of the imaging system 200 from high voltages that may be used to fire the illuminators 1402.
[0076] Of course, the housing 1202 may be configured in a variety of ways not limited to that detailed above. For example, the ventilation openings 1208 on wall 1204B may be replaced by one or more fans built into the wall 1204B or the wall 1204E. Moreover, depending on the specific location of the light source 216 in any given application of the imaging system 200, the ventilation openings 1208 may be located on the bottom wall (not shown) of the housing 1202, for example.
Illuminators
[0077] With reference to Figures 14B and 15, the light source 216 includes one or more illuminators 1402 that generate light rays. The illuminators 1402 may be of various types, for example, incandescent bulbs, light emitting diodes, or fluorescent tubes of various types including, but not limited to, mercury- or neon-based fluorescent tubes. In one embodiment, the illuminators 1402 are two xenon tubes. Xenon tubes are well known in the relevant technology and are readily available. The xenon tubes 1402 can include borosilicate glass that absorbs ultra-violet radiation. Xenon tubes are preferred because they produce sufficient light to allow use of a CMOS camera 214 in the imaging system 200. Xenon tubes are also preferred since they provide a broad spectrum of light rays, which enables use of color to enhance detection of crystal growth in the wells of the sample plate 212.
[0078] The actual dimensions of the illuminators 1402 are chosen to suit the specific application. For example, in the imaging system 200 the xenon tubes 1402 are long enough to cover one dimension of the sample plate 212 so that it is not necessary to move the light source 216 when the lens assembly 230 or sample plate mount 210 are repositioned. As shown in Figure 14B, the illuminators 1402 may be supported on a board 1405, which may also support electronics for control of the illuminators 1402.
Off-axis Lighting
[0079] In one embodiment, two illuminators 1402 are positioned to provide different locations of the illumination source, e.g., both on-axis and off-axis lighting of the wells in the sample plate 212. As used here, the imaging axis of the lens assembly means the principal axis of the lens assembly. For example, first and second xenon tubes 1402 can be positioned, respectively, a first and second distance from the imaging axis of the lens assembly 230. Typically, the first and second distances are substantially equal in length, and the first xenon tube is positioned opposite the imaging axis from the second xenon tube.
[0080] In one embodiment, the xenon tubes 1402 are mounted about an inch on either side of the area directly under the lens assembly 230. This configuration allows the use of an indirect lighting effect when only one xenon tube is fired. That is, when two xenon tubes are positioned off the imaging axis, the controllers and logic 110 or 160 can control the tubes to provide on-axis or off-axis illumination of the sample plate 212. One xenon tube can be fired to provide off-axis illumination of the sample plate 212. When the two xenon tubes are fired simultaneously, a more conventional backlit scene is obtained. In some applications, off-axis illumination is preferred because it produces shadows on small objects in a sample droplet stored in a well of the sample plate 212. The shadows caused by off-axis lighting enhance the ability of the controllers and logic 110 or 160, or an operator, to detect objects in the sample.
[0081] In one embodiment, for example the imaging system 150 shown in Figure 1B, the controllers and logic 160 control the assembly 155 to capture two images of a droplet in a well of the sample plate 212. The imaging system 150 captures one image with the light source 216 lighting the sample with a first xenon tube. The imaging system 150 captures a second image with the light source 216 lighting the sample with the second xenon tube. The controllers and logic 160 can then combine the data from both images and perform an analysis based on the combined data. This results in enhanced characterization of the sample since the combination of the images typically provides more information about crystallization of the sample than a single image acquired with standard back lighting of the scene.
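One plausible way to combine the left-lit and right-lit images, as discussed above, is to keep both a per-pixel sum (a synthetic backlit view) and a per-pixel difference (which isolates the shadow contrast that reveals small objects). The exact combination used by the system is not specified in the disclosure; this is an illustrative choice.

```python
import numpy as np

def combine_offaxis(img_left, img_right):
    """Combine two off-axis-lit grayscale images (uint8 arrays).

    Returns (backlit, shadows): the clipped sum approximates a
    conventionally backlit scene, while the absolute difference
    highlights shadows cast in only one of the two exposures.
    """
    left = img_left.astype(np.int16)    # widen to avoid uint8 overflow
    right = img_right.astype(np.int16)
    backlit = np.clip(left + right, 0, 255).astype(np.uint8)
    shadows = np.abs(left - right).astype(np.uint8)
    return backlit, shadows
```

Downstream analysis could then threshold the `shadows` channel to locate candidate objects such as crystals within the droplet.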
Filters
[0082] In one embodiment, a source filter 270 (Figure 2) may be inserted in a filter slot 272 so that the filter 270 is interposed between the light source 216 and the sample plate 212. The various filters 270 may be inserted and removed from the filter slot 272 by a plate handler. Thus, the filter 270 may be automatically removed, or exchanged with another filter, by the imaging system 200. The source filter 270 may be any type of filter, such as a wavelength specific filter (e.g. red, blue, yellow, etc.) or a polarization filter.
Flash Mode
[0083] In one embodiment of the imaging system 200, the light source 216 includes one or more illuminators 1402 (e.g., fluorescent tubes) adapted to provide flash lighting. That is, the illuminators 1402 are controlled to illuminate the sample plate 212 only momentarily as the digital camera 214 captures an image of a well in the sample plate 212. This arrangement provides benefits over known devices in which illuminators remain in the on-position throughout the entire time that the sample plate 212 is handled by an imaging system. In the imaging system 200, since the illuminators 1402 are turned on for only a fraction of a second per image, very little heat radiation is transferred to the wells of the sample plate 212. Hence, one benefit of this configuration is that the imaging system 200 can provide high illumination levels for the camera 214 while minimizing energy or radiation transfer to the samples in the sample plate 212. An exemplary control circuit 1600 that provides controlled flash lighting is described below with reference to Figure 16.
Flash Lighting Circuitry
[0084] Figure 16 is a functional block diagram of an illumination duration ("flash") control circuit 1600 for an illuminator 1402. Although only one illuminator 1402 and one control circuit 1600 are shown, multiple illuminators 1402 can be used and independently controlled using additional control circuits 1600. The illuminator 1402 can be, for example, a xenon tube having a length greater than the maximum width of the sample plate 212 to be used in the imaging system 100, 150, or 200. By having such a dimension, the illuminator 1402 can be located underneath and along one axis of the sample plate 212 to illuminate all the wells in one row or column of the sample plate 212 without repositioning the illuminator 1402.
[0085] A first end of the illuminator 1402 is connected to a first capacitor 1602 and a first resistor 1604. The opposite end of the first resistor 1604 is connected to a power supply 1606. The power supply 1606 may be controlled by a dedicated RS232 line, for example. The opposite or second end of the first capacitor 1602 that is not connected to the illuminator 1402 is connected to ground or a voltage common.
[0086] The second end of the illuminator 1402 is connected to the anode of a first silicon controlled rectifier ("SCR") 1607 and a first terminal of a second capacitor 1608, respectively. An SCR is a solid state switching device that can provide fast, variable proportional control of electric power. A resistor 1620 is connected between the first terminal of the second capacitor and the cathode of a second SCR 1610. The second terminal of the second capacitor 1608 is connected to an anode of the second SCR 1610. The cathode of the first SCR 1607 is connected to the ground or voltage common potential. The cathode of the second SCR 1610 is connected to the cathode of the first SCR 1607 and is similarly connected to ground or the voltage common potential. The anode of the second SCR 1610 is also connected to a second resistor 1614 that connects the anode of the second SCR 1610 to the power supply 1606.
[0087] A trigger 1612 of the illuminator 1402 is connected to the gate of the first SCR
1607 so that both can be triggered simultaneously. This common connection controls the trigger 1612 of the illuminator 1402 and the start of illumination. The gate of the second SCR 1610 controls a stop or end of illumination.
[0088] The duration of illumination provided by the illuminator 1402 can be controlled as follows. Initially, the first and second SCRs 1607 and 1610, respectively, are not conducting. The first capacitor 1602 is charged up to the level of the voltage of the power supply 1606 using the first resistor 1604. The power supply 1606 can, for example, charge the first capacitor to 300 volts or more.
[0089] The size of the first capacitor 1602 relates to the amount of energy that can be transferred to the illuminator 1402. The illuminator 1402 provides an illumination based in part on the amount of energy provided by the first capacitor 1602. The first capacitor 1602 can be one capacitor or a bank of capacitors. The first capacitor 1602 can be, for example, a 600 μF capacitor.
[0090] The sizes of the resistors 1620 and 1614 are determined in part by the desired voltage rise time on the second capacitor 1608. Smaller resistors 1620 and 1614 allow the second capacitor 1608 to charge quickly. However, the second SCR 1610 can inadvertently trigger if the voltage impulse at its anode is too great. Thus, the values of the resistors 1620 and 1614 are typically chosen to allow the second capacitor 1608 to recharge before the next image flash trigger, but not to recharge so quickly as to inadvertently trigger conduction in the second SCR 1610.
[0091] The resistor 1620 provides an electrical path from the anode of the first SCR
1607 to ground or voltage common to allow the second capacitor 1608 to charge.
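The recharge-timing constraint described in paragraphs [0090] and [0091] can be sketched with a standard RC charging calculation. The capacitor values (600 μF, 20 μF) come from paragraphs [0089] and [0092]; the resistor values and the flash repetition interval below are illustrative assumptions only, since the patent does not specify them.

```python
# Sketch of the recharge-timing check from [0090]-[0091]. Resistor values
# and the flash interval are assumed for illustration; capacitor values
# follow the 600 uF / 20 uF examples given in the text.
import math

def recharge_time(r_ohms: float, c_farads: float, fraction: float = 0.99) -> float:
    """Time for an RC circuit to charge to `fraction` of the supply voltage."""
    return -r_ohms * c_farads * math.log(1.0 - fraction)

R_CHARGE = 1_000.0    # first resistor 1604 (ohms), assumed
C_MAIN = 600e-6       # first capacitor 1602 (farads), per [0089]
R_COMMUTATE = 2_000.0 # resistors 1614/1620 charging path (ohms), assumed
C_COMMUTATE = 20e-6   # second capacitor 1608 (farads), per [0092]

t_main = recharge_time(R_CHARGE, C_MAIN)
t_comm = recharge_time(R_COMMUTATE, C_COMMUTATE)

# Both capacitors must recharge before the next flash trigger arrives.
flash_interval = 10.0  # seconds between flashes, assumed
ready = max(t_main, t_comm) < flash_interval
print(f"main: {t_main:.2f}s, commutating: {t_comm:.3f}s, ready: {ready}")
```

With these assumed values the main capacitor needs about 2.8 seconds and the commutating capacitor under 0.2 seconds, satisfying a 10-second flash interval while keeping the rise on capacitor 1608 slow enough to avoid falsely gating SCR 1610.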
[0092] The illuminator 1402 is ready to trigger once the first capacitor 1602 is charged. The second capacitor 1608 is charged by the power supply 1606 through the second resistor 1614 concurrent with the charging of the first capacitor 1602. The second capacitor 1608 is chosen to be large enough to generate a current potential that shuts off the first SCR 1607 and, thus, to terminate illumination by the illuminator 1402. The second capacitor 1608 can be a single capacitor or can be a bank of capacitors. The second capacitor 1608 can be, for example, a 20 μF capacitor.
[0093] After the first and second capacitors 1602 and 1608 have been charged, the duration of illumination can be controlled. The illuminator 1402 initially illuminates when the trigger signal is provided to the control of the illuminator 1402 and the gate of the first SCR 1607. The illuminator 1402 can include a triggering circuit that triggers the illuminator 1402 in response to a logic signal. If the illuminator 1402 does not include this circuit, an external triggering circuit can be included.
[0094] The first SCR 1607 conducts in response to the trigger signal. The first SCR
1607 then continues to conduct even in the absence of a gate signal. The first SCR 1607 can be shut off by interrupting the current through the SCR or by reducing the voltage drop across the first SCR 1607 to below the forward voltage of the device.
[0095] The second SCR 1610 is controlled by a stop signal generator 1616 to connect the second capacitor 1608 in parallel with the first SCR 1607. However, the second capacitor 1608 is charged in opposite polarity to the voltage drop across the first SCR 1607. Thus, when the second SCR 1610 initially conducts, the voltage from the second capacitor 1608 is placed in opposite polarity across the first SCR 1607 thereby shutting off the first SCR 1607.
[0096] After the first SCR 1607 is triggered by a gate signal and begins to conduct, the second end of the illuminator 1402 and the first terminal of the second capacitor 1608 are pulled to ground via the first SCR 1607. The illuminator 1402 then illuminates in response to the current flowing through the illuminator 1402. The second SCR 1610 controls turn-off of the illuminator 1402. The second SCR 1610 begins to conduct when a stop signal is applied to the gate of the second SCR 1610. This pulls the second terminal of the second capacitor 1608 to ground. Because a capacitor resists instantaneous voltage changes, the voltage across the second capacitor
1608 momentarily causes the voltage at the anode of the first SCR 1607 to be pushed below the ground or voltage common potential. A negative voltage at the anode of the first SCR 1607 results in a loss of current flowing through the first SCR 1607, which results in shut down of the first SCR 1607. The second capacitor 1608 discharges almost immediately. The illuminator 1402 shuts off when the first SCR 1607 turns off because there is no longer a current path through the illuminator 1402.
[0097] Thus, a microprocessor, controller, or microcontroller can be programmed to control the trigger 1612 and stop signal generator 1616. The processor controls the trigger signal to initiate illumination with the illuminator 1402. The processor then controls the stop signal to control termination of the illuminator 1402. The processor can thus control the trigger and stop signals to control the duration of the illumination. The processor can control the duration of the illumination (a "flash") in predetermined intervals or can control the duration of the illumination over a range of time. For example, the processor can control the duration of the flash in microsecond steps across an interval of approximately 20μS - 600μS. Alternatively, the processor can control the lower range of the duration of the flash to be 0, 20, 40, 50, 75, 100, 150, 200, 250, 300, 350, 400, 450, 500, or 550μS. In another alternative, the processor can control the upper range of the duration of the flash to be 40, 50, 75, 100, 150, 200, 250, 300, 350, 400, 450, 500, 550 or 600μS. In one embodiment, the digital camera 214 issues the signal to turn on the illuminator 1402 so that the "flash" will be in synchronization with the electronic shutter of the digital camera 214.
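The trigger/stop sequencing described in paragraph [0097] can be sketched as a small control routine: assert the trigger (gating SCR 1607), wait the programmed duration, then assert the stop (gating SCR 1610). This is a minimal illustrative sketch, not the patent's implementation; the `trigger` and `stop` callables stand in for whatever signal-generation hardware interface is actually used.

```python
# Sketch of the flash-duration control in [0097]. The 20-600 us window
# follows the range discussed in the text; the signal callables are
# hypothetical placeholders for the real gate drivers.
import time

FLASH_MIN_US = 20
FLASH_MAX_US = 600

def fire_flash(duration_us: int, trigger, stop) -> None:
    """Fire the illuminator for duration_us microseconds.

    `trigger` asserts the gate of the first SCR 1607 (start of illumination);
    `stop` asserts the gate of the second SCR 1610 (end of illumination).
    """
    if not FLASH_MIN_US <= duration_us <= FLASH_MAX_US:
        raise ValueError(f"duration must be {FLASH_MIN_US}-{FLASH_MAX_US} us")
    trigger()                            # gate SCR 1607: illumination begins
    time.sleep(duration_us / 1_000_000)  # programmed flash duration
    stop()                               # gate SCR 1610: commutates SCR 1607 off

# Example with stub signal generators that record the firing order.
events = []
fire_flash(100, trigger=lambda: events.append("trigger"),
           stop=lambda: events.append("stop"))
print(events)  # ['trigger', 'stop']
```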
[0098] The power supply 1606 can be a controllable high voltage power supply. The microprocessor, controller, or microcontroller can also control the output voltage of the power supply 1606 to further control the illumination provided by the illuminator 1402. For example, the microprocessor can control the output voltage of the power supply 1606 to vary the illumination provided by the illuminator 1402 for the same illumination duration. Thus, for a given illumination duration, the microprocessor can control the power supply 1606 to a lower output voltage to minimize the illumination. Similarly, for the same illumination duration, the microprocessor can control the power supply 1606 to a higher output voltage, thereby increasing the illumination.
[0099] The microprocessor can control the output voltage of the power supply 1606 over a range of, for example, 180-300 volts. The illuminator 1402 may not consistently illuminate for voltages below 180 volts when the illuminator 1402 is a xenon flash tube. The microprocessor can control the output voltage of the power supply 1606 using a digital control word. Thus, the microcontroller can control the output voltage of the power supply 1606 in steps determined in part by the number of bits in the control word and the tunable range of the power supply 1606. The microcontroller can, for example, provide a 10-bit control word, an 8-bit control word, a 6-bit control word, a 4-bit control word, or a 2-bit control word. Alternatively, the power supply 1606 output voltage can be continuously variable over a predetermined range.
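The relationship between the control-word width and the voltage step size in paragraph [0099] can be illustrated with a simple linear mapping over the 180-300 volt range. The linear mapping itself is an assumption; the patent specifies only the range and the control-word widths, not the encoding.

```python
# Sketch of mapping an n-bit control word onto the 180-300 V supply range
# discussed in [0099]. The linear encoding is an illustrative assumption.
V_MIN, V_MAX = 180.0, 300.0

def control_word_to_volts(word: int, bits: int) -> float:
    """Map an n-bit control word (0 .. 2**bits - 1) linearly onto 180-300 V."""
    max_word = (1 << bits) - 1
    if not 0 <= word <= max_word:
        raise ValueError("control word out of range for the given bit width")
    return V_MIN + (V_MAX - V_MIN) * word / max_word

# With a 10-bit word the step size is 120 V / 1023, roughly 0.117 V;
# an 8-bit word coarsens the step to 120 V / 255, roughly 0.471 V.
print(control_word_to_volts(0, 10))     # 180.0
print(control_word_to_volts(1023, 10))  # 300.0
print(round((V_MAX - V_MIN) / 1023, 3))
```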
[0100] Thus, the microcontroller can control a level of illumination by controlling the illumination duration, the power supply 1606 output voltage, or a combination of the two. The microprocessor's ability to control the combination of the two permits a wider range of brightness outputs than if only one parameter were controllable. The microprocessor's ability to control both illumination duration and power supply 1606 output voltage is advantageous for different lens zoom conditions. When magnification is low, such as when the lens is zoomed out, a relatively small amount of light is required. When magnification is high, a relatively large amount of light is required to capture an image. Use of filters and varying apertures may also be used to adjust the amount of light from the light source.
Operation
[0101] The imaging system 200 includes software modules that control and direct the lens assembly 230 to perform the following functions. In one embodiment, the imaging system 200 is configured to automatically control the brightness of the image. For example, after the camera 214 captures an image of a well of the sample plate 212, the software determines whether the brightness is within predetermined thresholds. If the brightness does not fall within the thresholds, the controllers and logic of the imaging system 200 iteratively adjust the illumination intensity of the illuminators 1402 to adjust the brightness of the images until the brightness falls within the thresholds. In some embodiments, the brightness of the image may be evaluated based on a predetermined region (or set of pixels) of the image captured.
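The iterative brightness loop of paragraph [0101] can be sketched as follows. The `capture` callable, the thresholds, and the multiplicative adjustment step are all illustrative assumptions; the patent states only that the illumination is adjusted iteratively until the measured brightness falls within predetermined thresholds.

```python
# Sketch of the iterative auto-brightness control in [0101]: capture,
# measure mean brightness of the region of interest, then step the
# illumination level up or down until the measurement is in range.
def auto_brightness(capture, lo=100.0, hi=150.0, step=0.1, max_iters=20):
    """Return an illumination level whose captured brightness lies in [lo, hi].

    `capture(level)` is a hypothetical callable returning the mean pixel
    brightness of the region of interest at the given illumination level.
    """
    level = 1.0
    for _ in range(max_iters):
        brightness = capture(level)
        if lo <= brightness <= hi:
            return level
        # Too bright: dim by one step; too dark: brighten by one step.
        level *= (1 - step) if brightness > hi else (1 + step)
    raise RuntimeError("brightness did not converge within max_iters")

# Stub camera: brightness proportional to illumination level.
level = auto_brightness(lambda lvl: 200.0 * lvl)
assert 100.0 <= 200.0 * level <= 150.0
```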
[0102] The brightness of the illuminators 1402 may be adjusted when capturing a plurality of images of the same sample droplet. In one embodiment, for example the imaging system 150 shown in Figure 1B, the controllers and logic 160 control the assembly 155 to capture two images of a droplet in a well plate of the sample plate 212. The imaging system 150 captures one image with the light source 216 lighting the sample with a first brightness level. The imaging system 150 captures a second image with the light source 216 lighting the sample with a second brightness level. In one embodiment, the controllers and logic 160 can then combine the data from both images and perform an analysis based on the combined data, which may result in enhanced characterization of the sample. In some embodiments, the brightness used for the second image may be logically controlled based on analyzing the brightness of the first image, determining if a lighter or darker second image may result in enhanced characterization of the sample, and adjusting the light source 216 to light the sample accordingly.
[0103] The imaging system 200 can also be configured with software to automatically focus the image. An exemplary autofocus routine is as follows. Once the lens assembly 230 is positioned over a sample of the sample plate 212, the objective lens 231 is moved along its imaging axis to a predetermined starting position. The camera 214 then acquires an image of the sample and/or well at that focus position. In one embodiment, the software obtains a "focus score." This may be done, for example, by examining the brightness values of a set of pixels (e.g., a 500x3 pixel area) in the captured image, applying a low pass filter, and computing the sum of the squares of the differences in brightness of adjacent pixels for the set of pixels. The position and focus score data points are stored in an array. The objective lens 231 is moved to the next predetermined incremental position on its imaging axis, and the process of acquiring an image, computing the focus score, and storing the position and focus score values is repeated. This process continues until the objective lens 231 has been moved to all the predetermined or desired positions, e.g., until it reaches a predetermined end position by incrementally moving in a predetermined step size from the starting position. In one embodiment, the step size depends at least in part upon a predetermined maximum number of images to be acquired during the autofocus routine.
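The focus-score computation described in paragraph [0103] can be sketched on a one-dimensional strip of brightness values: smooth with a low pass filter, then sum the squared differences of adjacent pixels. A sharply focused image has stronger local contrast and therefore a higher score. The 3-tap moving average below is an assumed stand-in for the unspecified low pass filter.

```python
# Sketch of the focus score from [0103]: low-pass filter a strip of
# brightness values, then sum squared adjacent differences.
def focus_score(strip):
    """Score a 1-D strip of brightness values; higher means sharper focus."""
    # 3-tap moving-average low-pass filter (interior samples only).
    smooth = [(strip[i - 1] + strip[i] + strip[i + 1]) / 3.0
              for i in range(1, len(strip) - 1)]
    # Sum of squares of the differences in brightness of adjacent pixels.
    return sum((b - a) ** 2 for a, b in zip(smooth, smooth[1:]))

sharp = [0, 0, 0, 100, 100, 100, 0, 0, 0]    # hard edges: in-focus look
blurry = [0, 10, 30, 60, 70, 60, 30, 10, 0]  # soft gradient: out-of-focus look
assert focus_score(sharp) > focus_score(blurry)
```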
[0104] Next, the software searches the lens position/focus score array to identify the lens position with the best focus score. In one embodiment, the software then proceeds to compute the lens positions that are midway from the best focus score position to positions adjacent to it in the array. That is, the software examines the array of positions already imaged, finds the nearest position greater than the lens position associated with the best focus score, and calculates a "midpoint" position between them. A similar process is performed with regard to the nearest lens position that is less than the best focus score position. The software then acquires images at the midpoint positions and obtains corresponding focus scores. The software once again evaluates the array to identify the image with the best focus score, using a step size that is, say, one-half of the initial step size. These tasks are repeated until, for example, a maximum number of images acquired during autofocus, or a minimum step size, has been reached.
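The coarse-to-fine refinement of paragraph [0104] can be sketched as a loop that scores an initial sweep of lens positions, then repeatedly inserts midpoints on either side of the current best position, halving the effective step until a minimum step is reached. The `score_at` callable stands in for "move the lens, capture an image, compute the focus score"; the termination parameters are illustrative assumptions.

```python
# Sketch of the midpoint refinement search in [0104].
def autofocus(score_at, start, end, step, min_step=0.5):
    """Return the lens position with the best (highest) focus score."""
    positions = []                      # array of (focus score, lens position)
    p = start
    while p <= end:                     # initial sweep at the coarse step size
        positions.append((score_at(p), p))
        p += step
    while step > min_step:              # refine: halve the step each pass
        best_pos = max(positions)[1]
        below = [pos for _, pos in positions if pos < best_pos]
        above = [pos for _, pos in positions if pos > best_pos]
        step /= 2.0
        neighbors = ([max(below)] if below else []) + ([min(above)] if above else [])
        for neighbor in neighbors:      # image and score the midpoints
            mid = (best_pos + neighbor) / 2.0
            positions.append((score_at(mid), mid))
    return max(positions)[1]

# Stub score function with a focus peak at position 7.3.
best = autofocus(lambda p: -(p - 7.3) ** 2, start=0.0, end=10.0, step=2.0)
print(round(best, 2))
```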
[0105] In some embodiments, the imaging system 200 performs the processes of autofocusing and automatically adjusting the brightness, as described above, for each sample of a sample plate 212 received by the imaging system 200. After the desired brightness and focus are set, the imaging system 200 then captures an image and stores it in, for example, the data storage 190. In one embodiment, the automatically determined brightness and focus are also stored for each sample. In another embodiment, the software of the imaging system 200 calculates and stores a value associated with the mean of the brightness and focus positions for the aggregate of samples of the first plate. This value is then associated with each of the position/focus score data points in the array. Subsequent plates are examined using the mean brightness and focus as initial imaging values.
[0106] The imaging system 200 may also include additional functionality related to automatically finding the edges of a droplet in a well of a sample plate 212. In one embodiment, after the edges of the drop have been found, the imaging system 200 finds the centroid of the droplet and moves the lens assembly 230 to the centroid. The imaging system 200 then determines the magnification required to image substantially only that area corresponding to the droplet, adjusts the zoom, and acquires the image.
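The centroid-and-zoom step of paragraph [0106] can be sketched as follows: given the detected droplet edge pixels, the centroid is the mean of their coordinates, and the required magnification follows from the droplet's bounding-box span. This is a minimal illustrative sketch; the patent does not specify the centroid algorithm.

```python
# Sketch of the droplet centroid and extent computation from [0106].
def droplet_centroid_and_span(edge_pixels):
    """Return ((cx, cy), (width, height)) for a list of (x, y) edge pixels."""
    xs = [x for x, _ in edge_pixels]
    ys = [y for _, y in edge_pixels]
    cx = sum(xs) / len(xs)              # centroid: mean of edge coordinates
    cy = sum(ys) / len(ys)
    # The bounding-box span drives the zoom needed to image only the droplet.
    return (cx, cy), (max(xs) - min(xs), max(ys) - min(ys))

edges = [(10, 20), (30, 20), (10, 40), (30, 40)]
(cx, cy), (w, h) = droplet_centroid_and_span(edges)
print(cx, cy, w, h)  # 20.0 30.0 20 20
```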
[0107] In another embodiment, the imaging system 200 may be configured to perform automatic adjustment of aperture. In this embodiment, the imaging system 200 receives settings for either maximum image resolution or maximum depth of field. The imaging system 200 then determines the corresponding aperture by, for example, looking up one or more tables having values correlating aperture with maximum resolution and/or maximum depth of field. Of course, magnification data may be part of these tables.
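The table-driven aperture selection of paragraph [0107] can be sketched as a simple lookup keyed on the operator's goal and the magnification. The f-numbers in the table are illustrative assumptions only; the patent describes the lookup mechanism, not particular values.

```python
# Sketch of the aperture lookup in [0107]. Table entries are assumed.
APERTURE_TABLE = {
    # (goal, magnification) -> f-number (illustrative values)
    ("max_resolution", "low"):  4.0,       # wider aperture limits diffraction blur
    ("max_resolution", "high"): 5.6,
    ("max_depth_of_field", "low"):  11.0,  # smaller aperture extends focus depth
    ("max_depth_of_field", "high"): 16.0,
}

def select_aperture(goal: str, magnification: str) -> float:
    """Look up the f-number for the requested imaging goal and magnification."""
    return APERTURE_TABLE[(goal, magnification)]

print(select_aperture("max_depth_of_field", "high"))  # 16.0
```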
[0108] In yet another embodiment, the imaging system 200 may be configured to perform automatic zoom of a substance in a sample stored in a well of the sample plate 212. In one embodiment, for example, the imaging system identifies a "crystal-like object" in the sample, calculates its centroid, moves the lens assembly 230 and digital camera 214 to the centroid, adjusts the zoom level, and captures an image of the "crystal-like object." In another embodiment, the imaging system 200 can be configured to capture an image of a sample or a crystal-like object, perform image analysis of the image, adjust imaging parameters (e.g., focus, depth of field, aperture, zoom, illumination filtering, image filtering, brightness, etc.) and retake an image of the sample or crystal-like object. The imaging system 200 can perform this process iteratively until predetermined thresholds (e.g., contrast, edge detection, etc.) are met. In some embodiments, the images captured in an iterative process can be either analyzed individually, or can be combined with other images and the resulting image analyzed.
[0109] Thus, in one embodiment of the imaging system 200, the imaging system receives a sample plate 212 and for each sample performs the following functions: automatic adjustment of brightness and aperture, autofocus, automatic detection of the sample droplet, and acquisition and storage of images. The imaging system 200 stores the aperture, brightness, focus position, drop position and/or size. The imaging system 200 may then use mean values of these factors as initial imaging settings for subsequent plates.
[0110] To increase the amount of data available for analysis of the sample, or crystal detection, in some embodiments an illumination source filter 270 (Figure 2) may be inserted in the filter slot 272 so that the filter 270 is interposed between the light source 216 and the sample plate 212. In one embodiment, the various filters 270 may be inserted and removed from the filter slot 272 by a plate handler. Thus, the filter 270 may be automatically removed or exchanged by the imaging system 200. Alternatively, or additionally, an image filter (such as those that may be placed in the photo-filter carriage 237) may be interposed between the sample droplet in the sample plate 212 and the objective lens 231. In one embodiment, the image filter includes a polarization filter that provides a variable amount of polarization on the light incident on the objective lens 231. The use of these filters can be automatically controlled by imaging software routines and/or determined by operator defined variables.
[0111] The motorized control of aperture, focus, and zoom of the lens assembly 230 in conjunction with remote control of the light source 216 (e.g., brightness and direction of illumination) allows dynamic optimization of contrast, field of view, depth of field, and resolution.
Imaging System Integrated With Automated Sample Analysis System
[0112] Figure 17 depicts a functional block diagram of an automated sample analysis system 1700 having an imaging system 100, 150, or 200. The system 1700 includes controllers and logic 1760 for controlling various subsystems housed in a cabinet 1702. The system 1700 can further include a shelf access door 1712 for allowing access to a removable shelf system 1720 and/or a stationary shelf system 1722. In one embodiment, a removable shelf access door 1710 can be provided. The system 1700 can include a transport assembly 1730 that can consist of a plate handler 1732, an elevator assembly 1734, and a rotatable platform 1736. The system 1700 can further include an environmental control subsystem 1765 that employs a refrigeration unit 1762 and/or a heater 1764.
[0113] In one embodiment, the system 1700 also includes an imaging system 200 as has been described above. The imaging system 200, having subcomponents 210, 214, 216, 218, 220, and 230, which are fully detailed above with reference to Figures 2-16, can be housed in the cabinet 1702. This arrangement ensures that the samples in the sample plates remain at all times within the confines of a controlled environment. That is, once a sample plate is stored in the cabinet 1702, it is unnecessary to expose the sample plate to the environment external to the cabinet since the system 1700 is capable of automatically (i.e., without operator intervention) carrying out the imaging of the sample within the cabinet 1702.
[0114] Embodiments of an automated sample analysis system 1700 having an imaging system in accordance with the invention are described in the related United States Provisional Patent Application entitled "AUTOMATED SAMPLE ANALYSIS SYSTEM AND METHOD," having U.S. Patent Application No. 60/444,519, which is referenced above.
Sample Analysis System
[0115] Figure 18 depicts a block diagram of an imaging and analysis system 1800, according to one embodiment of the invention. The imaging system 1805 can be an imaging system 100, 150, or 200 as described above, or another suitable imaging system that provides similar functionality to the imaging systems described herein. The system 1800 includes an imaging system controller 1820 that provides logical control of the imaging system 1805 to, for example, direct the imaging system 1805 to image a particular sample on a particular sample plate 212, all the samples on the sample plate 212, or image a subset of the samples. The imaging controller 1820 may also control the imaging parameters used by the imaging system 1805. Such imaging parameters can include, for example, focus, depth of field, aperture, zoom, illumination filtering, image filtering and brightness.
[0116] The system 1800 also includes an image storage device 1810 that stores images of samples captured by the imaging system 1805. The image storage device 1810 can be any suitable computer accessible storage medium capable of storing digital images, e.g., a random access memory (RAM), hard disk, floppy disk, optical disk, compact disk, or magnetic tape. The system 1800 shows the image storage device 1810 separate from the imaging system 1805. In some embodiments, the image storage device 1810 can be included in the imaging system 1805, or it may be included in a system that may also include an image analyzer 1815, the imaging system controller 1820, or a scheduler 1825. In one embodiment, a computer includes all the control, scheduling, analysis and imaging software for the system 1800. Alternatively, the software for the system 1800 may reside and run on a plurality of computers that are in communication with each other. In some embodiments, the imaging system 1805 may be configured to provide captured images directly to the image analyzer 1815, or it may be configured to typically store images on the image storage device 1810 and provide images to the image analyzer 1815 as directed by the imaging system controller 1820.
[0117] The scheduler 1825 communicates with the image analyzer 1815 and the imaging system controller 1820 to control the analysis and imaging of samples based on user provided input. For example, the scheduler can schedule the imaging of a particular droplet or a plurality of droplets on a sample plate, and coordinate the imaging of said droplet or plurality of droplets with its subsequent analysis. The scheduler 1825 can use a database 1830 to store information relating to scheduling the images and image specific information, for example, the size of pixels in each of the stored images, in a suitable format for quick retrieval. Knowing the pixel size can allow the analyzer 1815 to reduce sampling to an appropriate density and size for particular objects in the image. The information in the database 1830 can be available with each request to process an image. The database 1830 can reside on the same computer as the scheduler 1825 or on a separate computing device.
[0118] The scheduler 1825 provides an analysis request to the image analyzer 1815.
According to one embodiment, the analysis request includes an image list, including the resolution of each image and the absolute X,Y location of its center. The image list typically contains only one image but may contain a plurality of images. The analysis request can also contain an analysis method including a list of parameters that specify options controlling how to analyze the image(s) and what to report. Additionally, the analysis request can include the Uniform Resource Locator ("URL") of a definition file 1835, i.e., an electronic address that may be on the Internet, such as an ftp site, gopher server, or Web page. The definition file 1835 defines parameters used by the image analyzer 1815, e.g., neural network dimensions, weights and training resolution (e.g., pixel granularity, or the spacing between pixels, of images used to train the neural network). The definition file 1835 may be a single file or a plurality of files, but will be referred to hereinafter in the singular.
[0119] The image analyzer 1815 also receives an analysis method file(s) 1840. The analysis method file may be a single file or a plurality of files, but will be referred to hereinafter in the singular. The analysis method file 1840 includes parameters that can be used by the various image analysis modules contained in the image analyzer 1815, e.g., a content analysis module 1930, a notable regions module 1935, and a crystal object analysis module 1940 (Figure 19), described below, according to one embodiment. The image analyzer 1815 can also include functionality that determines the content of an image in terms of objects and/or regions of, for example, crystals or precipitate, or clear regions, that is, regions that do not show any features. The image analyzer 1815 includes a neural network to identify features, e.g., crystals, precipitate, and edges, that are depicted in the image, according to one embodiment.
Preferably, the image analyzer 1815 is configured to identify objects and regions of interest in an image quickly enough to allow the system 1800 to re-image specific objects or regions, if desired, while the corresponding sample plate is still in the imaging system 1805.
[0120] The image analyzer 1815 provides an analysis response to the scheduler 1825.
The analysis response, described in further detail below, typically includes the parameters used for the analysis and the results of the particular analysis performed, e.g., the count of crystal, precipitate, clear and edge samples, regions of crystals, and/or a list and description of objects found in the image.
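The analysis request and response exchanged in paragraphs [0118]-[0120] can be sketched as plain data structures. All field names, the file path, and the URL below are illustrative assumptions; the patent describes the content of the messages, not a concrete format.

```python
# Sketch of the request/response content described in [0118]-[0120].
from dataclasses import dataclass, field

@dataclass
class ImageRef:
    path: str
    resolution: float   # image resolution, e.g., microns per pixel (assumed unit)
    center_xy: tuple    # absolute (X, Y) location of the image center

@dataclass
class AnalysisRequest:
    images: list               # the image list: usually one image, may be several
    method_params: dict        # options controlling analysis and reporting
    definition_file_url: str   # URL of the definition file 1835

@dataclass
class AnalysisResponse:
    params_used: dict
    counts: dict = field(default_factory=dict)   # crystal/precipitate/clear/edge
    objects: list = field(default_factory=list)  # objects found in the image

request = AnalysisRequest(
    images=[ImageRef("well_A1.png", 2.5, (120.0, 80.0))],  # hypothetical image
    method_params={"report": ["counts", "objects"]},
    definition_file_url="ftp://example.invalid/defs/crystal_net.xml",  # assumed
)
response = AnalysisResponse(
    params_used=request.method_params,
    counts={"crystal": 3, "precipitate": 1, "clear": 0, "edge": 2},
)
print(response.counts["crystal"])  # 3
```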
[0121] The analysis results can be reviewed using an output display 1845 that can be co-located with the scheduler or at a remote location. The output display 1845 may be coupled to the system 1800 via a web server, or via a LAN or other small network topology. Embodiments of a remote output display in accordance with the invention are described in related United States Provisional Patent Application entitled "REMOTE CONTROL OF AUTOMATED LABS," having Application No. 60/444,585.
Illustrative Embodiment
[0122] A computer containing analysis and control modules, and methods related to controlling an imaging and analysis system are illustrated and described with reference to Figures 19-22, according to one embodiment of the invention. Figure 19 depicts a computer 1900 that includes a processor 1905 in communication with memory 1910, e.g., a hard disk and/or random access memory (RAM). The processor 1905 is also in communication with an image analysis module 1960 that can include various modules configured to perform the functionality of the image analyzer 1815 (Figure 18) described herein.
[0123] The computer 1900 may contain conventional computer electronics that are not shown, including a communications bus, a power supply, data storage devices, and various interfaces and drive electronics. Although not shown in Figure 19, it is contemplated that in some embodiments, the computer 1900 may include a video display (e.g., monitor), a keyboard, a mouse, loudspeakers or a microphone, a printer, devices allowing the use of removable media including, but not limited to, magnetic tapes and magnetic and optical disks, and interface devices that allow the computer 1900 to communicate with another computer, including but not limited to a computer network, a LAN, an intranet, or a WAN, e.g., the Internet.
[0124] The computer 1900 is in communication with an imaging storage device, for example, image storage device 1810 (Figure 18), and is configured to receive an image of a sample from the storage device and determine the contents of the sample, using one or more analysis processes. The computer 1900 can be co-located with the image storage device, located near the image storage device, e.g., in the same building, or geographically separated from the image storage device. The computer 1900 can receive the image from the image storage device via, e.g., a direct electronic connection or through a network connection, including a local area network, or a wide area network, including the Internet. It is also contemplated the computer 1900 can receive the image via a suitable type of removable media, e.g., a 3.5" floppy disk, compact disc, ZIP drive, magnetic tape, etc.
[0125] It is contemplated that the computer 1900 can be implemented with a wide range of computer platforms using conventional general purpose single chip or multichip microprocessors, digital signal processors, embedded microprocessors, microcontrollers and the like. The computer 1900 can operate independently, or as part of a computing system. The computer 1900 may include stand-alone computers as well as personal computers, workstations, servers, clients, minicomputers, main-frame computers, laptop computers, or a network of individual computers. The configuration of the computer 1900 may be based, for example, on Intel Corporation's family of microprocessors, such as the PENTIUM family and Microsoft Corporation's WINDOWS operating systems such as WINDOWS NT, WINDOWS 2000, or WINDOWS XP.
[0126] The computer 1900 includes one or more modules or subsystems that incorporate the analysis processes described herein. As can be appreciated by a skilled technologist, each module can be implemented in hardware or software, or a combination thereof, and comprise various subroutines, procedures, definitional statements, and macros that perform certain tasks. For example, in a software implementation, all the modules are typically separately compiled and linked into a single executable program. The processes performed by each module may be arbitrarily redistributed to one of the other modules, combined together with other processes in a single module, or made available in, for example, a shareable dynamic link library. A module may be configured to reside on the addressable storage medium and configured to execute on one or more processors. Thus, a module may include, by way of example, other subsystems, components, such as software components, object-oriented software components, class components and task components, processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables. It is also contemplated that the computer 1900 may be implemented with a wide range of operating systems such as Unix, Linux, Microsoft DOS, Macintosh OS, OS/2 and the like.
[0127] The analysis module 1960 can include a pre-processing module 1925 that can filter the received image prior to further processing. The image may be filtered to remove "noise" such as speckles, high frequency noise or low frequency noise that may have been introduced by any of the preceding steps, including the imaging step. Filtering methods to remove high frequency or low frequency noise are well known in image processing, and many different methods may be used to achieve suitable results. For example, according to one embodiment of a filtering procedure that removes speckle, for each pixel the mean and standard deviation of every other pixel along the perimeter of a 5x5 pixel area centered on that pixel are computed. If the center pixel varies by more than a threshold multiplied by the standard deviation, it is replaced by the mean value. Then the slope of the 5x5 image pixel intensities is calculated and the center pixel is replaced by the mean value of pixels interpolated on a line across the calculated slope.
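The perimeter-based speckle test described above can be sketched as follows. This is a minimal illustration, not the embodiment itself: it assumes grayscale intensities, reads "every other pixel along the perimeter" as all sixteen perimeter pixels of the 5x5 area, uses a hypothetical threshold default, and omits the follow-on slope-interpolation step.

```python
import numpy as np

def despeckle(img, threshold=3.0):
    """Sketch of the speckle filter: compare each pixel to the mean and
    standard deviation of the perimeter pixels of the 5x5 neighborhood
    centered on it, and replace outliers with the perimeter mean.
    The threshold default is a hypothetical value."""
    out = img.astype(float).copy()
    h, w = img.shape
    for y in range(2, h - 2):
        for x in range(2, w - 2):
            block = img[y - 2:y + 3, x - 2:x + 3].astype(float)
            # The 16 pixels on the perimeter of the 5x5 block.
            perim = np.concatenate([block[0, :], block[4, :],
                                    block[1:4, 0], block[1:4, 4]])
            mean, std = perim.mean(), perim.std()
            if abs(img[y, x] - mean) > threshold * std:
                out[y, x] = mean
    return out
```

A single bright speckle in an otherwise uniform field is replaced by the surrounding mean, while uniform regions pass through unchanged.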
[0128] The analysis module 1960 also includes one or more modules that perform image analysis to determine information about the sample contents, including content analysis module 1930, notable regions analysis module 1935, and crystal object analysis module 1940. The content analysis module 1930 determines the count of crystal, precipitate, clear and edge pixels in the image, and can be optionally enabled to operate only inside a specific region of the sample. The notable regions analysis module 1935 determines a list of regions of a specified pixel type, e.g., crystal, precipitate, clear and edge pixels. The crystal object analysis module 1940 determines objects containing crystal pixels that meet certain criteria, for example, size, area, or density.
[0129] Figure 19 also shows analysis module 1960 includes a report inner/outer non-clear ratio module 1945 that determines the ratio of non-clear pixel density inside a sample region over non-clear pixel density outside a sample region. The analysis module 1960 also includes a graphical output analysis module 1950 that generates a color-coded image depicting each of the various features found in a sample image in a specified color. These modules are further described hereinbelow. Other analysis modules 1955 that incorporate different image analysis processes may also be included in the analysis module 1960. In one example, an analysis module 1955 can analyze the change in two or more images of the same sample taken at two different times. The analysis module 1955 can receive the count of pixels that are classified as crystal, precipitate, clear or edge pixels in an image of a particular region of a sample at a time T1 and save the count information with a reference to the region of the sample imaged. When the same region of a sample is re-imaged at a later time T2, the analysis module 1955 receives the count of pixels that are classified as crystal, precipitate, clear and edge pixels in the image of the sample region at time T2. The analysis module 1955 can compare the count information from times T1 and T2 to determine if the droplet contains one or more crystals. One analysis method compares the total number of pixels classified as crystal pixels at times T1 and T2 to determine if the sample contains crystals. Another comparison method compares the percentage of crystal pixels at time T1 to the percentage of crystal pixels at time T2. If the count or the percentage of crystal pixels increases beyond a threshold value, the sample will be deemed to contain crystals. The other pixel classifications (e.g., precipitate, clear and edge) can also be compared and evaluated to facilitate the crystal analysis.
A time-based comparison method, where the count information is saved for one image and compared to a second subsequent image, can be used with any sample processing algorithm.
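The percentage-based comparison between times T1 and T2 might be sketched as follows; the dictionary representation of the pixel counts and the percentage threshold are hypothetical illustrations, not values taken from the description above.

```python
def crystal_percentage(counts):
    """Percentage of pixels classified as crystal. All four classes are
    counted in the total here for simplicity; edge pixels could also be
    excluded, as noted elsewhere in the description."""
    total = sum(counts.values())
    return 100.0 * counts.get("crystal", 0) / total if total else 0.0

def crystals_grew(counts_t1, counts_t2, pct_threshold=5.0):
    """The sample is deemed to contain crystals if the crystal-pixel
    percentage rises by more than the (hypothetical) threshold between
    imaging times T1 and T2."""
    return crystal_percentage(counts_t2) - crystal_percentage(counts_t1) > pct_threshold
```

The same structure accommodates the count-based variant by comparing raw crystal counts instead of percentages.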
[0130] In another example, the analysis module 1955 may analyze a series of two or more images of crystal growth using a grid approach. In this analysis method, two images I1 and I2 are divided up into grids, and the corresponding grids in each image are compared for change in the number of crystal pixels, using, for example, the actual number of pixels or the percentage of crystal pixels. The pixel count information can be kept for each image and used to compare to other images taken at a different time. In any of the analysis methods described herein, the method can include analyzing every pixel, or skipping one or more pixels between the pixels analyzed.
[0131] A scheduler module 1915 and an imaging system controller module 1920 are also included in computer 1900, according to one embodiment. These modules are configured to include functionality that schedules the imaging of sample plates/droplet samples and subsequent analysis of the images, and controls the imaging system 100, 150, 200, 1805, as described herein, e.g., for scheduler 1825 and imaging system controller 1820, respectively.
[0132] The image analysis software package may include support software that performs training and configuring of perception and analysis functionality, e.g., for a neural network. Some of the algorithms included in the image analysis software modules may use stochastic processing and may include the use of a pseudo-random number generation to find answers. All such functions can be provided a random number generator seed in request parameters received by the software module. When the analysis modules are properly configured, the same results should be obtained for a given image given the same parameters that affect its algorithms and any pre-processing of the image. The image analysis modules can be configured so that an analysis method using a pseudo-random number does not affect the results of a different analysis method or software module.
[0133] In one embodiment, the image analysis software works with an image size of, for example, 800 by 600 pixels, a zoomed-in resolution of 2,046 pixels/mm (0.5 μm/pixel), and a zoomed-out resolution of 186 pixels/mm (5.4 μm/pixel), or 1,024 by 1,024 pixels, a zoomed-in resolution of 2,460 pixels/mm (0.41 μm/pixel), and a zoomed-out resolution of 220 pixels/mm (4.5 μm/pixel). The image analysis modules may optionally use the same neural network for both zoomed-in and zoomed-out images; however, the quality of the results may suffer if only one neural network is used, and it may be advantageous to train multiple neural networks, e.g., one for zoomed-in images and one for zoomed-out images. The image analysis software can also be adaptable to other image sizes and pixel resolutions; however, the training of new neural networks may be necessary in order to suitably process these images. If the resolution of the images varies, each definition file may include its training resolution, that is, the spacing between sampled pixels that was used to train the neural network. This information allows the algorithms to consider how to adapt images of varying resolution for use with the neural networks.
[0134] The analysis module receives an analysis request (Figure 18) containing an image list that includes the images to be analyzed. The analysis request also includes, for each image, its resolution in pixels/mm and the absolute X-Y location of the center of the image. Typically, there is only one image in the image list, however, multi-image methods may also be used. The analysis request also includes an analysis method, which is a collection of parameters that specify options controlling how to analyze the images and what to report. In specifying the analysis method, a URL of the definition file is included. The definition file defines the neural network's dimensions, weights and training resolution, i.e., a pixel granularity of the images that were used to train the neural network. Examples of the parameters are first described generally below, and then specifically as they relate to the content analysis module 1930, notable regions analysis module 1935, and the crystal object analysis module 1940, according to one embodiment.
[0135] The analysis request may include parameters that specify how a working copy of the image is prepared for all subsequent processing. For example, parameters can include options for a color to grayscale conversion of the image, and resizing of the image using pixel interpolation methods. Also, the parameters may specify the output of an image, for example, they may specify whether and how an image file representing the pixel interpretation should be generated. This generated image file may be visually displayed and further evaluated by a user. The parameters are also used by the analysis modules, e.g., in the content analysis module, the parameters specify whether an image is scanned and analyzed to determine statistics of its contents in terms of crystal, precipitate, clear and edge features. These parameters specify whether crystal-like objects should be searched for and reported. Options may include a scan grid, ID criteria and the maximum number of objects to find.
[0136] The parameters may also be used by the notable region analysis module 1935 to specify whether notable regions in an image should be reported and, if so, the scan grid in micrometers, the size, that is, the width times the height in micrometers, the ID criteria, and the quantity of regions to report. The crystal object analysis module 1940 can use the parameters to specify whether effective contiguous subregions of crystals are identified and reported as crystal objects, how this identification should be performed, and the quantity of crystal objects to identify.
[0137] The parameters can also specify whether to report the inner/outer non-clear ratio. If this ratio is to be reported, the output includes a ratio of the non-clear pixel density inside a sample region over the non-clear pixel density outside of the sample region. For example, the ratio would be 3.0 if every 100th pixel inside of a sample region is non-clear and every 300th pixel outside of a sample region is non-clear. According to one embodiment, ratios above 1 billion are truncated to that value.
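The inner/outer non-clear ratio computation described above can be sketched directly from its definition; the function signature and the guard against an all-clear exterior are assumptions for illustration.

```python
def inner_outer_nonclear_ratio(inner_nonclear, inner_total,
                               outer_nonclear, outer_total,
                               cap=1_000_000_000):
    """Non-clear pixel density inside the sample region divided by the
    density outside it; ratios above one billion are truncated to that
    value, as described above. The zero-density guard is an assumption."""
    inner_density = inner_nonclear / inner_total
    outer_density = outer_nonclear / outer_total
    if outer_density == 0.0:
        return float(cap)
    return min(inner_density / outer_density, float(cap))
```

The worked example from the text holds: one non-clear pixel per 100 inside and one per 300 outside yields a ratio of 3.0.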
[0138] Image sampling parameters may include, for example, a color processing parameter which specifies how each pixel is converted to a floating point intensity value, or it may specify the linear grayscale for image conversion. If the image is already grayscale, pixels are converted linearly from black, e.g., 0.0, to white, e.g., 1.0. If color is selected, the pixels are linearly converted to 0.0 to 1.0 with equal channel weighting for each color. Pixel interpolation parameters may include, for example, no pixel interpolation, that is, only a closest pixel method will be used for pixel interpolation. This is generally the fastest interpolation method but typically results in reduced image quality. Interpolation methods that may be selected include bilinear and cubic spline interpolation, which yield higher quality images but are more computationally complex and take more time or resources to generate. The re-size parameter includes options of 1:1, that is, the image is not resized; automatic, where the image is resized to match the training resolution using the specified interpolation method; and scale factor, where the image is re-sized using this factor and the specified interpolation method.
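The equal-channel-weight color conversion described above can be sketched as follows, assuming 8-bit RGB channels (the 0-255 range is an assumption; the description only specifies the 0.0-1.0 output range).

```python
def to_intensity(pixel):
    """Linear conversion of an 8-bit RGB pixel to a 0.0-1.0 floating
    point intensity with equal weighting for each color channel."""
    r, g, b = pixel
    return (r + g + b) / (3 * 255.0)
```

Black maps to 0.0, white to 1.0, and a pure primary color to one third.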
[0139] The analysis modules 1930, 1935, 1940 are configured to receive an analysis request from a scheduler module 1915 and generate a response, as described below. The content analysis module 1930 determines counts of types of pixels in the sample images, e.g., crystal, precipitate, clear and edge pixels, as depicted in the image. In the illustrative embodiment described herein, the content analysis module 1930 is implemented as a neural network.
[0140] The content analysis module 1930 receives a set of parameters that include parameters that indicate whether this module should be enabled, whether the content analysis should take place inside the sample region only or inside and outside the sample region, and the number of pixels to be skipped during the image analysis. If enable is set to NO, no analysis by the content analysis module 1930 is done and nothing is reported. If enable is set to YES, then the content analysis module analyzes the sample image. If the inside-sample-region-only option is set to YES, the edge of the sample region is found first, and the analysis is done only within the sample region edge. If inside-sample-region-only is set to NO, then checking is done inside and outside the sample region. A process for identifying the edge of a sample region is described hereinbelow in reference to Figure 20, according to one embodiment. If the number of pixels to be skipped is set to 0, all the pixels in the image will be used. If the number of pixels to be skipped is set to 1, every other pixel in the image will be used for the content analysis; if set to 2, every third pixel will be used, etc. The default parameter for skipped pixels is typically set to 0.
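The skip-pixels sampling scheme described above amounts to a stride of skip + 1 in each axis; a minimal sketch (the function name and coordinate ordering are assumptions):

```python
def sampled_coordinates(width, height, skip_pixels=0):
    """Coordinates visited for a given skip-pixels setting: 0 samples
    every pixel, 1 every other pixel, 2 every third pixel, and so on,
    applied along both image axes."""
    step = skip_pixels + 1
    return [(x, y) for y in range(0, height, step)
                   for x in range(0, width, step)]
```

For a 4x4 image, skip 0 visits all 16 pixels while skip 1 visits only 4.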
[0141] The response of the content analysis module 1930 includes an "echo" of the parameters used during the content analysis processing, and the counts of each pixel type, i.e., crystal, precipitate, clear and edge pixels, found in the image. If the inside-sample-region-only option is enabled, the edge count can be used to assess how well the edge of the sample region was found. If it is not enabled, the edge count may be ignored.
[0142] The notable region analysis module 1935 processes an image and determines regions of a specified size that include the minimum levels of crystal, precipitate or non-clear pixels. The request parameters for the notable region analysis module 1935 can include an enable parameter which is set to either "YES" or "NO" that determines if notable region analysis should be performed and reported. The request parameters can also include a region size or area that is used to determine the size of the smallest region the notable region analysis module will identify. A skip-pixel parameter can be included to control the number of pixels that will be skipped during processing, where "0" means to check all of the pixels, "1" means to sample every other pixel, that is, sample the pixels with one unsampled pixel between them, etc. Typically, the default value for skip-pixel is "0."
[0143] The request parameters can also include the maximum number of regions to report and the minimum percentages of crystal pixels, precipitate pixels and non-clear pixels to report. Typically, pixels determined to be edge-type pixels are ignored. The notable region analysis module 1935 can be configured to identify regions with the highest percentage of each specified pixel type. If a region contains less than the minimum percentage of pixels, it is not saved and the search for regions ends. Regions typically do not go outside of the input image. Newly found regions generally do not overlap existing regions. The report of results from the notable regions analysis module includes all the request parameters and a list of the regions identified. The results for each region can include its absolute position, size, the number of crystal pixels and the total pixels sampled, not including edge pixels.
[0144] The crystal object analysis module 1940 identifies small regions in the image that are rich in crystal pixels. The small regions, or objects, comprise one or more "cells." The request parameters for the crystal object analysis module can include an enablement parameter which determines if this analysis should be performed and reported. The request parameters also include a skip-pixels parameter that operates as previously described above, and parameters that control the size of the cells identified, for example, a cell-minimum-size parameter to control the smallest width or height of a cell, a cell-minimum-area parameter which indicates the smallest overall area of a cell, a cell-minimum-density parameter which indicates the proportion from 0 to 1 of crystal pixels the cell must contain in order to be reported, and an object-minimum-size parameter which indicates one or more dimensions that the overall object must achieve in order to be reported. The request parameters can also include a pseudo-random generator seed which is used for the crystal object analysis stochastic processing. The crystal object analysis module 1940 typically includes the limitation that the center of a cell cannot be inside another cell. Identified cells that touch are grouped and identified as a single crystal object, and the largest overall dimension of the crystal object is computed. If the largest overall dimension is less than the minimum size, the object is discarded. The crystal object analysis processing can also compute an object area as the sum of the cell density times the cell area, and further compute the object centroid. The results from the crystal object analysis module 1940 can include all the request parameters provided to the module, a list of objects identified and their descriptions. The list is sorted in descending order by an object's area. Each object description includes the object area (μm2), the centroid (X, Y in μm) and a list of cells that make up each object.
Each cell is described with its absolute position and size (μm), crystal pixel count and total pixel count.
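The object bookkeeping described above can be sketched as follows. The object area follows the stated definition (sum of cell density times cell area); the centroid is computed here as a density-area-weighted mean of cell centers, which is an assumption since the description does not specify the centroid weighting, and the cell dictionaries are a hypothetical representation.

```python
def crystal_object_summary(cells):
    """Area and centroid of a crystal object from its cells. Each cell
    dict holds a center (x, y) in um, an area in um^2, and a crystal
    pixel density from 0 to 1 (hypothetical representation)."""
    # Object area as defined above: sum of cell density times cell area.
    area = sum(c["density"] * c["area"] for c in cells)
    # Assumed centroid: density-area-weighted mean of cell centers.
    cx = sum(c["density"] * c["area"] * c["x"] for c in cells) / area
    cy = sum(c["density"] * c["area"] * c["y"] for c in cells) / area
    return {"area_um2": area, "centroid_um": (cx, cy)}
```

Two equal cells at x = 0 and x = 10 give an object centered at x = 5 with twice the single-cell weighted area.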
[0145] The graphical output module 1950 generates a representation of the analyzed image which can be displayed and further analyzed. For example, grayscale and/or color coding pixel characteristics may be adjusted by the graphical output module 1950. The analysis request for the graphical output module 1950 includes an image path parameter that defines where the image to be analyzed is found. If the image path parameter is empty, no further processing is done. A base value parameter indicates whether a "base image," i.e., an image used to generate the representation of the analyzed image, is black, gray or white. If the base value is gray, the base image begins as a grayscale rendition of the resampled image. Otherwise, the base image begins as a white or black image, as indicated by the base value. The parameters include a gray "min" value and a gray "max" value, which are typically from 0 to 1, and specify the linear grayscale compression. For example, adjusting the gray min or max values can control the color coding contrast or flatten the image, and they are typically set to defaults of 0 for the gray min and 0.75 for the gray max.
[0146] An opaque parameter indicates whether a pixel in the base image should be replaced with the color coding associated with the particular type of corresponding pixel in the analyzed image. For example, if the opaque parameter is set to YES or the base parameter equals black or white, the appropriate color coding replaces the pixel. If the opaque parameter is set to NO, the color for a base image pixel is generated by OR'ing the color with the corresponding pixel in the analyzed image. A crystal color parameter provided in the analysis request sets the color coding value for pixels identified as crystals, a precipitate color parameter sets the color coding for precipitate pixels, and an edge color sets the color coding for pixels identified as edges. For example, the default values for the crystal color parameter may be blue, the precipitate color parameter may be green and the edge color parameter may be red. The graphical output module 1950 writes the color coded image file to the image path specified in the request parameters, unless the path parameter is empty or invalid. The generated color-coded image file typically does not contain region annotations, but annotations can be superimposed on the image file by another process, if desired. The graphical output module 1950 provides an analysis report to the scheduler module 1915 that includes the request parameters that were used to produce the color coded image file.
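The per-pixel color coding described above, including the opaque/OR behavior, might look like the following sketch; the tuple RGB representation is an assumption.

```python
def color_code_pixel(base_rgb, class_rgb, opaque):
    """One pixel of the color-coded output: with opaque set, the class
    color replaces the base pixel; otherwise the class color is OR'ed
    with the base pixel channel by channel, as described above."""
    if opaque:
        return class_rgb
    return tuple(b | c for b, c in zip(base_rgb, class_rgb))
```

With the (assumed default) blue crystal color, a mid-gray base pixel becomes pure blue in opaque mode and a blue-tinted gray in OR mode.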
[0147] In one embodiment, the analysis modules 1930, 1935, 1940 can function as service functions that are capable of quickly identifying objects and/or regions within an image, so that a scheduler module 1915 can dispatch control information to the imaging controller module 1920, which in turn directs the imaging system to re-image specific areas of a droplet using at least one different imaging parameter (e.g., the magnification or zoom level may be different, a different configuration of lighting, such as off-axis lighting, may be used, etc.), while the sample plate containing the sample just analyzed is in the imaging device. In one embodiment, an analysis module 1960 can analyze at least 10,000 images per day under typical conditions, where the images are less than or equal to 1.0 megapixels, i.e., the equivalent of processing each image in 8.64 seconds, and where one instance of the image analysis software is running on one PC. The analysis module may be packaged and distributed in a Java 2 file. Java message service may be used to receive requests and send the responses from the analysis module(s). Extensible markup language (XML) may also be used for the analysis requests and responses.
[0148] Test images are used with training software to train the neural networks to analyze crystal growth in sample droplets. As the general software implementation of a neural network is well known in the art, only the training of a neural network is described, according to one embodiment of the invention. Training software allows the user to create, open, display, edit and save lists of images in training/test set files, and is described herein according to one embodiment of the invention. The test images include identified subimages containing edge, crystal, precipitate and clear pixels within a wide variety of images. For each image, the user can designate "training subimages" as crystal, precipitate, edge or clear. The resolution of the subimages can be user-adjustable. To minimize user fatigue during image designation, the software can include a single-click designation action that efficiently designates the subimages as crystal, precipitate, edge or clear. The images containing the designated training subimages can be saved as a set of training files. The training software can display training subimages in table form and/or as color-coded markers on an image. Subimages may be moved by either dragging the marker or editing the table. Subimages may also be deleted either from the image or from the table. The training software can be configured to allow a user to define the neural network dimensionality, select a training set file and another file for testing, and perform iterative training and testing using the selected sets of files. Training data, e.g., neural network weights, training and test error, and the number of iterations, is saved in a definition file.
[0149] To train the neural network, the intensity levels of pixels in a selected image area, e.g., a subimage, are provided as an input to the neural network. The neural network identifies each pixel as a particular type of pixel, e.g., edge, clear, crystal or precipitate. The results are compared to what is actually correct, and corresponding error values are calculated. Small adjustments are made to the weights within the neural network based on the error values, and then another test image containing a designated subimage is provided as an input to the neural network. This process is performed for other test images and can be repeated for many thousands of iterations, where each time the weights may be slightly adjusted to provide a more accurate output.
[0150] When the neural network is used for content analysis, an image of a sample droplet is provided as an input to the neural network. The output of the neural network includes a rating for each pixel that indicates a degree of confidence that the pixel depicts each of the different pixel classifications, for example, edge, crystal, precipitate, and clear. The rating is typically between zero and one, where zero indicates the lowest degree of confidence and one indicates the highest degree of confidence. The overall content of an image can be determined by counting the number of pixels of each classification, computed as a percentage of the crystal, precipitate, edge and clear pixels contained in the image.
[0151] When considering the content analysis strategy, accuracy of the results is important, but so is the speed of the analysis. Analysis algorithms can allow the user to balance and prioritize the characteristics of speed and quality. For example, one analysis option identifies edges of a drop within the image, and may be used with quick and coarse resolution search parameters to first identify the edge of the drop, and then the interior of the drop may be analyzed with a higher resolution search.
[0152] According to one embodiment of the invention, a supervised learning type of neural network is used to classify the subimages as crystal, precipitate, edge of drop or clear, using the pixel intensity, not the pixel hue. In one embodiment, the entire image is scanned, sampling subimages on a host-specified grid, where the spacing of the grid is in millimeters, not pixels. The resolution of the images is provided as a parameter received from the host. Pie charts can be generated graphically showing the results of the neural network analysis. According to one embodiment, the outputs of a neural network can be summed for each type of object identified and divided by the sum of all the outputs, for example, the results can be A% crystal, B% precipitate, C% clear, where A + B + C = 100%.
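The percentage computation described above (outputs summed per class and divided by the sum of all outputs, so that A% + B% + C% = 100%) can be sketched as follows; the mapping from class names to per-subimage output lists is a hypothetical representation.

```python
def content_percentages(ratings):
    """Per-class content percentages from neural network outputs: the
    outputs for each class are summed across all sampled subimages and
    divided by the sum of all outputs, so the percentages total 100%."""
    sums = {name: sum(values) for name, values in ratings.items()}
    total = sum(sums.values())
    return {name: 100.0 * s / total for name, s in sums.items()}
```

These percentages are the values that could feed the pie-chart display mentioned above.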
[0153] Each image analysis method file contains neural network definitions, e.g., "dimensions" and "weights." The method file also includes parameters that specify the analysis options, including whether to perform drop edge detection and, if drop edge detection is selected, the sample grid spacing used to find the edges of the drop, and the sample grid spacing to find crystals within the drop. For example, drop edge detection finds the edge of a drop quickly with a relatively coarse grid spacing scan and then uses a relatively fine grid spacing scan inside the drop, according to one embodiment. A database can be used to associate the image analysis file with the image analysis results, so that if a better image analysis method is available at a later time, an image may be re-analyzed using the later analysis method.
[0154] The analysis modules can use a neural network to classify the contents of an image. To aid the neural network in the classification process, a fast operator can be used to identify if a pixel has a particular crystal characteristic. One embodiment of an edge detection process is described below and illustrated in Figure 20A. Color or black and white images of a sample droplet can be generated and used for identifying crystals. At step 2005, the edge detection process 2000 receives the image of a sample that may contain crystals. At step 2010 the process 2000 determines if the image received is a color image. If the image is a color image, it is converted to a grayscale image at step 2015. The image may be filtered at step 2020 to minimize undesirable characteristics such as speckle or other types of image "noise" during subsequent processing.
[0155] The edge detection process 2000 uses the gradient of the intensity of the pixels in the image to identify edges. At step 2025, for a plurality of pixels in the image, gradient information is calculated from a 3x3 set of pixels using a calculation based on the best fit of a plane through the image points. The gradient of intensity of the pixel in the center of the 3x3 set of pixels is the direction and magnitude of the maximum slope of the plane. The use of a 3x3 set of pixels helps to eliminate some of the effects of image noise on the process. Gradient information is calculated for selected pixels in the image. All the pixels in the image may be selected, or a subset of the pixels, e.g., an area of interest in the image which may be smaller than the whole image, may be selected. Gradient information is calculated for each selected pixel and stored in three arrays of the same dimensions as the received image. The first array contains the cosine of the angle of the gradient direction. The second array contains the sine of the angle of the gradient direction. The third array contains the magnitude, or steepness, of the gradient. Pixels with a calculated magnitude less than a given threshold have their gradient information set to zero so they are eliminated from further processing.
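The best-fit-plane gradient for a 3x3 block can be sketched as follows. With pixel coordinates taken as -1, 0, 1 in each axis, the least-squares plane slopes reduce to the column and row difference sums divided by 6; this closed form is a standard derivation, not a formula quoted from the description, and the return convention (cosine, sine, magnitude) mirrors the three arrays described above.

```python
import math

def gradient_from_3x3(block):
    """Gradient at the center of a 3x3 intensity block via a
    least-squares plane fit over coordinates -1, 0, 1. Returns
    (cos of gradient direction, sin of gradient direction, magnitude)."""
    # Slope in x: sum of (right - left) differences over the 9-point fit.
    gx = sum(block[r][2] - block[r][0] for r in range(3)) / 6.0
    # Slope in y: sum of (bottom - top) differences.
    gy = sum(block[2][c] - block[0][c] for c in range(3)) / 6.0
    mag = math.hypot(gx, gy)
    if mag == 0.0:
        return 0.0, 0.0, 0.0
    return gx / mag, gy / mag, mag
```

A block whose intensity rises by one per pixel from left to right yields a unit gradient pointing in the +x direction, and a flat block yields zeroed gradient information.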
[0156] At step 2030, edge pixels are identified using the gradient information. An edge pixel can be defined as a pixel for which the magnitude of the gradient of the image is a local maximum in the direction of the gradient. These pixels represent the points at which the rate of change in intensity is the greatest. A separate array of pixels is used (of the same dimensions as the original image) to store this information for further processing.
[0157] At step 2035, edge pixels are formed into groups based on the direction of their gradient. A threshold on the difference in direction is used to include or exclude pixels from a group. Each pixel in a group should be adjacent to another pixel in the group. The edge pixels are labeled identifying the group to which they belong. At step 2040, the group(s) with crystal characteristics are selected and at step 2045 the selected groups are provided to another analysis process for aid in further analysis of the image.
[0158] One characteristic that separates a crystal from other objects in an image is the straightness of the edge of the crystal. Figure 20B includes the same steps 2005 - 2035 as in Figure 20A, and then uses the crystal characteristic "straightness" to determine whether a group of pixels depicts a crystal. At step 2035 in Figure 20B, edge pixels are formed into a group(s), as described above for Figure 20A. At step 2040, the edge detection process 2000 determines the "straightness" of each labeled group of pixels using linear regression, according to one embodiment. The correlation from the linear regression and the number of pixels in the group are used to determine the "straightness" of the group. The straightness can be defined as the product of the count of pixels in the group and the reciprocal of 1.0 minus the fourth power of the correlation coefficient for the group, according to one embodiment. If the count of pixels is below a given threshold, the count is set to zero.
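The straightness measure defined above can be sketched directly; the minimum-count default and the guard against division by zero for a perfectly collinear group (correlation of exactly 1) are assumptions.

```python
def straightness(pixel_count, correlation, min_count=5):
    """Straightness of a labeled edge-pixel group: the pixel count times
    the reciprocal of 1.0 minus the fourth power of the correlation
    coefficient from the linear regression. Counts below the (assumed)
    threshold score zero."""
    if pixel_count < min_count:
        return 0.0
    denominator = 1.0 - correlation ** 4
    if denominator <= 0.0:  # perfectly collinear group
        return float("inf")
    return pixel_count / denominator
```

Groups with high correlation (straight edges) score disproportionately higher than groups of the same size with low correlation, which is the property that favors crystal edges.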
[0159] At step 2055, the edge detection process 2000 generates an image, hereinafter referred to as a "lines image," using the previously calculated straightness information. The lines image is the same shape and size as the subset of pixels selected for edge detection. The intensity value for a pixel in the lines image is set to the straightness value of the group that its corresponding pixel belongs to. At step 2060, the lines image, containing information indicating where "straight" pixels may be found, is provided to an analysis module to aid in crystal identification.
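The straightness measure of [0158], whose values populate the lines image of [0159], might be computed per labelled group as below. The minimum-count default and the handling of perfectly collinear groups (where the formula would divide by zero) are assumptions added for robustness.

```python
import math

def straightness(points, min_count=5):
    """Straightness of one labelled edge group (step 2040 of Figure 20B):
    pixel count times 1 / (1 - r**4), where r is the correlation coefficient
    of the group's (x, y) coordinates from a linear regression.  Groups
    smaller than min_count (an assumed default) score zero."""
    n = len(points)
    if n < min_count:
        return 0.0
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    sxy = sum((x - mx) * (y - my) for x, y in points)
    if sxx == 0.0 or syy == 0.0:
        return float('inf')          # perfectly vertical or horizontal line
    r = sxy / math.sqrt(sxx * syy)
    if abs(r) >= 1.0:
        return float('inf')          # perfectly straight diagonal line
    return n * 1.0 / (1.0 - r ** 4)
```

Long collinear groups score arbitrarily high while scattered blobs score near their raw pixel count, which is why the measure separates crystal facets from precipitate.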
[0160] Referring now to Figures 18 and 21, in a typical imaging and analysis process, the scheduler 1825 controls the imaging of samples by communicating to the imaging system controller 1815 the necessary information for imaging a particular plate and the droplet samples on that plate. The imaging system controller 1815 directs the imaging system 1805 to generate the images of the particular plate and droplet sample at a specified time or in a specified sequence, and the images are stored on the image storage device 1810. After an image is generated for a particular sample, the scheduler 1825 sends an analysis request to the image analyzer 1815, and the corresponding image for that sample is provided to the image analyzer 1815. The image analyzer 1815 determines the contents of the image using one or more of the various analysis modules, and provides results to the scheduler 1825 in an analysis response.
[0161] Figure 21 shows a process 2100 that uses the results of analyzing an image for subsequent imaging of the same sample, according to one embodiment of the invention. At step 2105, a first image of a sample is generated using a first set of imaging parameters, which may include, for example, focus, depth of field, aperture, zoom, illumination filtering, image filtering, and/or brightness. An analysis process receives the first image at step 2110 and analyzes the first image in accordance with the analysis request at step 2115. At step 2120, the process 2100 determines whether crystal formation in the first image is suspected, the presence of which can make an additional image of the sample desirable. For example, to determine if an additional image is desired, a score can be computed for the image. The score can be based upon user-adjustable thresholds and weighting factors, allowing the user to tailor preferences with experienced personal judgment. If the overall score exceeds a specific threshold, reimaging is warranted and an appropriate reimaging request is dispatched. Scoring and thresholds may be a function of apparent image content and/or of system bandwidth and scheduling issues. The more available the system resources, e.g., the imaging subsystem, the more likely zoomed-in reimaging is to occur.
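The scoring decision of [0161] can be sketched as a weighted sum of analysis features checked against a load-dependent threshold. The feature names, the dictionary interface, and the linear load adjustment are all illustrative assumptions; the patent specifies only user-adjustable weights and thresholds influenced by system bandwidth.

```python
def reimage_decision(features, weights, threshold, system_load=0.0):
    """Decide whether to request a second image (step 2120).  Each analysis
    feature (e.g. an inner/outer non-clear ratio or a straight-edge count)
    contributes a user-weighted term to an overall score; re-imaging is
    requested when the score exceeds a threshold that rises as the imaging
    subsystem gets busier (system_load in [0, inf))."""
    score = sum(weights.get(name, 0.0) * value
                for name, value in features.items())
    # A busier system raises the bar, so marginal wells are not re-imaged
    # when the imaging subsystem has little spare bandwidth.
    effective_threshold = threshold * (1.0 + system_load)
    return score > effective_threshold
```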
[0162] The analysis of the first image at step 2120 can be done using a relatively fast-running process, e.g., determining the inner/outer non-clear ratio for the droplet sample, and a further, more thorough analysis can be done at step 2140, according to one embodiment. At step 2125, information is provided to the imaging system that allows the same sample to be re-imaged to create a second image of the sample. Subsequent images generated of the same sample can use imaging parameters that are different than those used to generate the first image; that is, at least one value of an imaging parameter used to generate the second image is different than the values of the imaging parameters used to create the first image. At step 2135, the process 2100 receives the second image of the sample and analyzes the second image at step 2140 using, for example, the analysis methods described herein. Analysis results are output for evaluation or display at step 2145.
[0163] Using the analysis data as feedback to the imaging process and adjusting the imaging parameters accordingly, subsequently generated images can more clearly show the presence of crystal formation. For example, if the formation of a crystal in the sample droplet is suspected as a result of analyzing the first image, information can be communicated to the imaging system to zoom in on the area where the crystal formation is suspected and re-image the droplet using a higher magnification. Other imaging parameters, e.g., focus, depth of field, aperture, zoom, illumination filtering, image filtering, and brightness, can also be changed to obtain an image that may better depict the contents of the sample.
[0164] Timely analysis of the first image can result in a relatively large time savings if a subsequent image of a particular sample is desired. The process for handling a sample plate containing the sample, e.g., fetching the correct plate from a storage location, placing the plate in the imaging device, and returning the plate to its storage location, is very time consuming. When thousands of images are scheduled to be generated in one day, minimizing the amount of plate handling during image generation increases image generation and analysis throughput. According to one embodiment, the images generated from the samples on a sample plate are completely analyzed before the plate is removed from the imaging device. If desired, additional subsequent images of a sample contained on that plate can then be generated without incurring the time required to re-fetch the plate. In another embodiment, a certain percentage of the images are analyzed before the plate is removed. While this may not allow every sample to be re-imaged without re-fetching the plate, e.g., the analysis of the last sample imaged may not be completed before the plate is removed, it may still result in an overall time savings as it may allow quick reimaging of most of the samples, if desired, while not unduly delaying the removal of the plate from the imaging device.
[0165] Figure 22 illustrates a process 2200 that includes generating two images of a sample, where each image is generated using a set of imaging parameters that has at least one different imaging parameter than those used for the other image, according to one embodiment of the invention. At step 2205 a first image is generated using a first set of imaging parameters. At step 2210, the first image is received by an analysis process which determines one or more regions of interest in the first image at step 2215. The analysis process may be, for example, an edge detection process or a process implemented in one of the analysis modules, both of which are described hereinabove.
[0166] At step 2220, a second image is generated using a second set of imaging parameters where the second set of imaging parameters includes at least one imaging parameter that is different than the first set of imaging parameters. One or more imaging parameters may be changed to generate the second image. For example, the focal plane may be set to a different height relative to the droplet sample, the illumination of the sample may be changed, including using a different direction of illumination (e.g., lighting the sample from alternate sides and off-axis lighting) or a different illumination brightness level, the magnification or zoom level used may be changed, and different filtering may be used for each image (e.g., polarizing filters). At step 2225 the second image is received by an analysis process, and analyzed to determine a region or regions of interest at step 2230.
[0167] At step 2235, the regions of interest from the first and second images are combined to form a composite image. Typically, the composite image is the same size as the first and second images. The first and second images are analyzed to determine the portion or portions of each image that will be used to form the composite image. The composite image is generated by copying the values of the pixels from each region of interest in the first and second images into one composite image. At step 2240, the composite image is analyzed for the presence of crystal formation by a user, or automatically by an automatic or interactive analysis method, e.g., using the content analysis module, the notable regions analysis module, the crystal object analysis module, or a report inner/outer non-clear ratio module, as previously described, and the results are output at step 2245.
[0168] Although process 2200 shows a process to form a composite image using two images generated with different imaging parameters, more than two images may also be generated and used to form composite images, where each image is generated using at least one different imaging parameter, according to another embodiment. For example, according to one embodiment, a plurality of images are generated for a sample where the focal plane for each image is set at a different "height" relative to the sample. The resulting images may show varying sharpness in corresponding locations. The sharpness of the corresponding portions of the images is compared to determine which portion of each image should form the composite image. The portion of each image that best satisfies specified sharpness criteria, e.g., where a selected set of pixels exhibits the greatest contrast, may be selected from the plurality of images to form the composite image. The size of the portions compared across images may be as small as a single pixel or several pixels, and as large as tens or hundreds of pixels, or even larger.
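The multi-focal-height compositing of [0166]-[0168] is essentially focus stacking. A minimal sketch, using tile contrast (max minus min intensity) as the sharpness criterion the passage gives as an example; the tile size and function names are assumptions.

```python
def composite_by_sharpness(images, tile=2):
    """Build a composite from images of the same sample taken at different
    focal heights: split the frame into tiles, pick the source image whose
    tile shows the greatest contrast (the sharpness proxy suggested in
    [0168]), and copy that tile's pixels into the composite."""
    h, w = len(images[0]), len(images[0][0])
    composite = [[0] * w for _ in range(h)]
    for ty in range(0, h, tile):
        for tx in range(0, w, tile):
            coords = [(y, x) for y in range(ty, min(ty + tile, h))
                             for x in range(tx, min(tx + tile, w))]
            # The image whose tile spans the widest intensity range wins.
            best = max(images,
                       key=lambda im: max(im[y][x] for y, x in coords)
                                    - min(im[y][x] for y, x in coords))
            for y, x in coords:
                composite[y][x] = best[y][x]
    return composite
```

With tile=1 the comparison is per pixel; larger tiles correspond to the "tens or hundreds of pixels" portions the paragraph mentions.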
[0169] Figure 23 illustrates a process 2300 for visual evaluation of crystal growth by a user, according to another embodiment of the invention. At step 2305, process 2300 receives an image of a sample. At step 2310, the process 2300 classifies the pixels of the image according to their depiction of the contents of the sample, e.g., the pixels are classified as depicting crystal, precipitate, clear or an edge. The pixels of the image may be classified by processes incorporated into the content analysis module 1930, the notable regions analysis module 1935, the crystal object analysis module 1940, as described above, or another suitable analysis process. At step 2315, process 2300 generates a second image that is color-coded using the pixel classification information from step 2310. Step 2315 may be performed by the above-described graphical output analysis module 1950. To generate the second image, pixels that were classified as edge, precipitate or crystal pixels are depicted as a particular color, e.g., red for crystal pixels, green for precipitate pixels, and blue for edge pixels. One or all of the classified pixel types may be depicted according to the color-code scheme. The second image can have opaque color-coded information, or translucent color-coded information that also shows the original image through the color. The second image is typically the same size and shape as the image received at step 2305.
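The color-coding of step 2315 can be sketched as an alpha blend of a per-class palette over the original grey values. The palette follows the example colors in [0169]; the blending formula and function names are assumptions.

```python
# Example palette from [0169]: red = crystal, green = precipitate, blue = edge.
PALETTE = {'crystal': (255, 0, 0), 'precipitate': (0, 255, 0), 'edge': (0, 0, 255)}

def color_coded_image(gray, labels, alpha=1.0):
    """Generate the second image of step 2315: classified pixels are tinted
    with their class color.  alpha=1.0 gives the opaque variant; alpha<1.0
    gives the translucent variant that shows the original image through the
    color.  'clear' or unlabelled pixels keep their original intensity."""
    h, w = len(gray), len(gray[0])
    out = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            g = gray[y][x]
            color = PALETTE.get(labels[y][x])
            if color is None:                 # clear / unclassified pixel
                out[y][x] = (g, g, g)
            else:                             # blend class color over grey
                out[y][x] = tuple(round(alpha * c + (1 - alpha) * g)
                                  for c in color)
    return out
```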
[0170] At step 2320, the color-coded second image is visually displayed, for example, on a computer monitor or on a printout. At step 2325, the second image is visually analyzed to determine crystal growth information of the droplet sample. Displaying the color-coded image to a user facilitates efficient interpretation of the contents of the image and allows the presence of crystals in the image to be easily visualized.

[0171] The foregoing description details certain embodiments of the invention. It will be appreciated, however, that no matter how detailed the foregoing appears in text, the invention can be practiced in many ways. As is also stated above, it should be noted that the use of particular terminology when describing certain features or aspects of the invention should not be taken to imply that the terminology is being re-defined herein to be restricted to including any specific characteristics of the features or aspects of the invention with which that terminology is associated. The scope of the invention should therefore be construed in accordance with the appended claims and any equivalents thereof.

Claims

WHAT IS CLAIMED IS:
1. A method of evaluating crystal growth in a crystal growth system, comprising: receiving a first image of a sample, said first image generated by an imaging system using a first set of imaging parameters; analyzing information depicted in said first image to determine the contents of said sample; determining whether to generate another image of said sample based on the contents of said sample; and providing information to said imaging system to generate a second image of the sample using a second set of imaging parameters, wherein said second set of imaging parameters comprises at least one imaging parameter that is different from an imaging parameter in said first set of imaging parameters.
2. The method of Claim 1, further comprising receiving said second image of said sample.
3. The method of Claim 2, further comprising analyzing information depicted in said second image to determine the contents of said sample.
4. The method of Claim 1, wherein said different imaging parameter is depth-of-field.
5. The method of Claim 1, wherein said different imaging parameter is illumination brightness level.
6. The method of Claim 1, wherein said different imaging parameter is illumination source type.
7. The method of Claim 1, wherein said different imaging parameter is magnification.
8. The method of Claim 1, wherein said different imaging parameter is polarization.
9. The method of Claim 1, wherein said different imaging parameter is illumination source position.
10. The method of Claim 1, wherein said different imaging parameter is the location of the area imaged.
11. The method of Claim 1, wherein said different imaging parameter is the focus.
12. The method of Claim 1, wherein said analyzing information comprises determining whether said first image depicts the presence of crystals.
13. The method of Claim 12, wherein said first image comprises pixels, and said determining comprises classifying said pixels and comparing the number of pixels classified as crystals to a threshold value.
14. The method of Claim 13, wherein said classifying comprises using a neural network.
15. The method of Claim 12, wherein said first image comprises pixels, and wherein said determining comprises counting the number of said pixels depicting objects in the sample and evaluating said number using a threshold value.
16. The method of Claim 12, wherein said first image comprises pixels, and wherein said determining comprises classifying said pixels as either clear or non-clear and evaluating said classified pixels using a threshold value.
17. The method of Claim 1, wherein said analyzing information depicted in said first image comprises determining a region of interest in said first image and wherein said information is used to adjust said second set of imaging parameters so that the imaging system generates a zoomed-in second image of said region of interest.
18. The method of Claim 1, wherein said analyzing information depicted in said first image comprises displaying said first image and receiving user input based on said displayed image.
19. A method of analyzing crystal growth, comprising: receiving a first image having pixels depicting crystal growth information of a sample; identifying a first set of pixels in said first image comprising a first region of interest; receiving a second image having pixels depicting crystal growth information of said sample; identifying a second set of pixels in said second image comprising a second region of interest; merging said first set of pixels and said second set of pixels to form a composite image; and analyzing said composite image to identify crystal growth information of said sample.
20. The method of Claim 19, wherein said first image is generated by an imaging system using a first set of imaging parameters, said second image is generated by said imaging system using a second set of imaging parameters, and wherein said second set of imaging parameters comprises at least one imaging parameter that is different from the imaging parameters in said first set of imaging parameters.
21. The method of Claim 20, wherein said different imaging parameter is depth-of-field.
22. The method of Claim 20, wherein said different imaging parameter is illumination brightness level.
23. The method of Claim 20, wherein said different imaging parameter is illumination source type.
24. The method of Claim 20, wherein said different imaging parameter is magnification.
25. The method of Claim 20, wherein said different imaging parameter is polarization.
26. The method of Claim 20, wherein said different imaging parameter is illumination source position.
27. The method of Claim 20, wherein said different imaging parameter is the location of the area imaged.
28. The method of Claim 20, wherein said different imaging parameter is the focus.
29. A method of analyzing crystal growth information, comprising: receiving a first image comprising a set of pixels that depict the contents of a sample; determining information for each pixel in said set of pixels, wherein said information comprises a classification describing the type of sample content depicted by said each pixel, and a color code associated with each classification; generating a second image based on said information and said set of pixels; and displaying said second image.
30. The method of Claim 29, further comprising determining crystal growth information of the sample.
31. The method of Claim 29, wherein said classification comprises crystals.
32. The method of Claim 31, wherein said classification further comprises precipitate, edges, or clear.
33. A system for detecting crystal growth information, comprising: an imaging subsystem with means for generating an image of a sample, wherein said image comprises pixels that depict the content of said sample; an image analyzer subsystem coupled to said imaging system with means for receiving said image, means for classifying the content of said sample using said pixels and means for determining whether said sample should be re-imaged based on said classifying; and a scheduler subsystem coupled to said imaging analyzer system with means for causing said imaging subsystem to re-image said sample.
34. A computer-readable medium containing instructions for analyzing samples in a crystal growth system, by: receiving a first image of a sample, said first image generated by an imaging system using a first set of imaging parameters; analyzing information depicted in said first image to determine the contents of said sample; determining whether to generate another image of said sample based on the contents of said sample; providing information to said imaging system to generate a second image of the sample using a second set of imaging parameters, wherein said second set of imaging parameters comprises at least one imaging parameter that is different from an imaging parameter in said first set of imaging parameters; receiving said second image of said sample; and analyzing information depicted in said second image to determine the contents of said sample.
35. A computer-readable medium containing instructions for analyzing crystals, by: receiving a first image having pixels depicting crystal growth information of a sample; identifying a first set of pixels in said first image comprising a first region of interest; receiving a second image having pixels depicting crystal growth information of said sample; identifying a second set of pixels in said second image comprising a second region of interest; merging said first set of pixels and said second set of pixels to form a composite image; and analyzing said composite image to identify crystal growth information of said sample.
36. A computer-readable medium containing instructions for analyzing crystal growth information, by: receiving a first image comprising a set of pixels that depict the contents of a sample; determining information for each pixel in said set of pixels, wherein said information comprises a classification describing the type of sample content depicted by said each pixel, and a color code associated with each classification; generating a second image based on said information and said set of pixels; displaying said second image; and visually analyzing said second image to determine crystal growth information of the sample.
37. A method of analyzing crystal growth, comprising: generating a first image having pixels depicting crystal growth information of a sample at a first time; generating a second image having pixels depicting crystal growth information of said sample at a second time; and analyzing said first image and said second image to identify crystal growth information of said sample.
38. The method of Claim 37, wherein analyzing comprises comparing the number of pixels depicting crystal growth in said first image with the number of pixels depicting crystal growth in said second image.
39. The method of Claim 37, wherein analyzing comprises comparing the number of pixels within grid elements of said first image with the number of pixels within respective grid elements of said second image.
40. The method of Claim 39, wherein the size of the grid elements, defined by dividing up said first image and said second image, can vary from 1 pixel up to the total number of pixels in each image.
PCT/US2004/002633 2003-01-31 2004-01-30 Image analysis system and method WO2004070653A2 (en)

Applications Claiming Priority (8)

Application Number Priority Date Filing Date Title
US44458603P 2003-01-31 2003-01-31
US44458503P 2003-01-31 2003-01-31
US44451903P 2003-01-31 2003-01-31
US60/444,586 2003-01-31
US60/444,519 2003-01-31
US60/444,585 2003-01-31
US47498903P 2003-05-30 2003-05-30
US60/474,989 2003-05-30

Publications (2)

Publication Number Publication Date
WO2004070653A2 true WO2004070653A2 (en) 2004-08-19
WO2004070653A3 WO2004070653A3 (en) 2005-01-06

Family

ID=32854497

Family Applications (4)

Application Number Title Priority Date Filing Date
PCT/US2004/002633 WO2004070653A2 (en) 2003-01-31 2004-01-30 Image analysis system and method
PCT/US2004/003239 WO2004069984A2 (en) 2003-01-31 2004-01-30 Automated imaging system and method
PCT/US2004/002717 WO2004069409A2 (en) 2003-01-31 2004-01-30 Automated sample analysis system and method
PCT/US2004/002617 WO2004071067A2 (en) 2003-01-31 2004-01-30 Data communication in a laboratory environment

Family Applications After (3)

Application Number Title Priority Date Filing Date
PCT/US2004/003239 WO2004069984A2 (en) 2003-01-31 2004-01-30 Automated imaging system and method
PCT/US2004/002717 WO2004069409A2 (en) 2003-01-31 2004-01-30 Automated sample analysis system and method
PCT/US2004/002617 WO2004071067A2 (en) 2003-01-31 2004-01-30 Data communication in a laboratory environment

Country Status (2)

Country Link
US (4) US20040253742A1 (en)
WO (4) WO2004070653A2 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2703871A3 (en) * 2005-05-25 2014-09-03 Massachusetts Institute Of Technology Multifocal scanning microscopy systems and methods
RU2799730C1 (en) * 2022-11-15 2023-07-11 Общество с ограниченной ответственностью "Норгау Лабс" Device for reciprocating linear movement of the measuring microscope

Families Citing this family (120)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7158888B2 (en) 2001-05-04 2007-01-02 Takeda San Diego, Inc. Determining structures by performing comparisons between molecular replacement results for multiple different biomolecules
EP1414576A4 (en) * 2001-07-18 2007-07-18 Irm Llc High throughput incubation devices
US7632467B1 (en) * 2001-12-13 2009-12-15 Kardex Engineering, Inc. Apparatus for automated storage and retrieval of miniature shelf keeping units
US7433546B2 (en) * 2004-10-25 2008-10-07 Apple Inc. Image scaling arrangement
TWI267623B (en) * 2002-08-01 2006-12-01 Ming-Liau Yang Component monitoring method
US20040138827A1 (en) * 2002-09-23 2004-07-15 The Regents Of The University Of California Integrated, intelligent micro-instrumentation platform for protein crystallization
DE10353966A1 (en) * 2003-11-19 2005-06-30 Siemens Ag Method for access to a data processing system
US20050168353A1 (en) * 2004-01-16 2005-08-04 Mci, Inc. User interface for defining geographic zones for tracking mobile telemetry devices
JP4714939B2 (en) * 2004-06-25 2011-07-06 新世代株式会社 Pixel mixer
IL162921A0 (en) * 2004-07-08 2005-11-20 Hi Tech Solutions Ltd Character recognition system and method
US20060024746A1 (en) * 2004-07-14 2006-02-02 Artann Laboratories, Inc. Methods and devices for optical monitoring and rapid analysis of drying droplets
US8775823B2 (en) 2006-12-29 2014-07-08 Commvault Systems, Inc. System and method for encrypting secondary copies of data
US7639401B2 (en) * 2004-12-15 2009-12-29 Xerox Corporation Camera-based method for calibrating color displays
US7639260B2 (en) * 2004-12-15 2009-12-29 Xerox Corporation Camera-based system for calibrating color displays
EP1671530B1 (en) * 2004-12-18 2008-01-16 Deere & Company Harvesting machine
JP2006202209A (en) * 2005-01-24 2006-08-03 Toshiba Corp Image compression method and image compression device
KR100602972B1 (en) 2005-02-15 2006-07-20 한국과학기술연구원 Protein crystal inspection system
US7275594B2 (en) * 2005-07-29 2007-10-02 Intelliserv, Inc. Stab guide
DE102005047326B3 (en) * 2005-09-30 2006-11-02 Binder Gmbh Climate-controlled test cupboard for long-term storage stability tests on prescription medicines has spherical light detectors
US7930369B2 (en) 2005-10-19 2011-04-19 Apple Inc. Remotely configured media device
JP4923541B2 (en) * 2005-11-30 2012-04-25 株式会社ニコン microscope
US7636466B2 (en) * 2006-01-11 2009-12-22 Orbotech Ltd System and method for inspecting workpieces having microscopic features
DE102006001881A1 (en) * 2006-01-13 2007-07-19 Roche Diagnostics Gmbh Packaging cassette for reagent carriers
US8799043B2 (en) 2006-06-07 2014-08-05 Ricoh Company, Ltd. Consolidation of member schedules with a project schedule in a network-based management system
US8050953B2 (en) 2006-06-07 2011-11-01 Ricoh Company, Ltd. Use of a database in a network-based project schedule management system
US8577171B1 (en) * 2006-07-31 2013-11-05 Gatan, Inc. Method for normalizing multi-gain images
US7853100B2 (en) * 2006-08-08 2010-12-14 Fotomedia Technologies, Llc Method and system for photo planning and tracking
US7670555B2 (en) * 2006-09-08 2010-03-02 Rex A. Hoover Parallel gripper for handling multiwell plate
DE102006044091A1 (en) 2006-09-20 2008-04-03 Carl Zeiss Microimaging Gmbh Control module and control system for influencing sample environment parameters of an incubation system, method for controlling a microscope assembly and computer program product
US7826652B2 (en) * 2006-12-19 2010-11-02 Cytyc Corporation Method for forming an optimally exposed image of cytological specimen
US8107675B2 (en) * 2006-12-29 2012-01-31 Cognex Corporation Trigger system for data reading device
US9557217B2 (en) 2007-02-13 2017-01-31 Bti Holdings, Inc. Universal multidetection system for microplates
US9152433B2 (en) 2007-03-15 2015-10-06 Ricoh Company Ltd. Class object wrappers for document object model (DOM) elements for project task management system for managing project schedules over a network
US8826282B2 (en) * 2007-03-15 2014-09-02 Ricoh Company, Ltd. Project task management system for managing project schedules over a network
US20080235719A1 (en) * 2007-03-16 2008-09-25 Sharma Yugal K Image analysis for use with automated audio extraction
US20090040763A1 (en) * 2007-03-20 2009-02-12 Chroma Technology Corporation Light Source
EP1972874B1 (en) * 2007-03-20 2019-02-13 Liconic Ag Automated substance warehouse
GB0705652D0 (en) * 2007-03-23 2007-05-02 Trek Diagnostics Systems Ltd Test plate reader
DE102007023325B4 (en) * 2007-05-16 2010-04-08 Leica Microsystems Cms Gmbh Optical device, in particular a microscope
US7882177B2 (en) * 2007-08-06 2011-02-01 Yahoo! Inc. Employing pixel density to detect a spam image
CA2708211C (en) * 2007-08-17 2015-01-06 Oral Cancer Prevention International, Inc. Feature dependent extended depth of focusing on semi-transparent biological specimens
KR100945884B1 (en) * 2007-11-14 2010-03-05 삼성중공업 주식회사 Embedded robot control system
CN101470326B (en) * 2007-12-28 2010-06-09 佛山普立华科技有限公司 Shooting apparatus and its automatic focusing method
US20090217241A1 (en) * 2008-02-22 2009-08-27 Tetsuro Motoyama Graceful termination of a web enabled client
US20090217240A1 (en) * 2008-02-22 2009-08-27 Tetsuro Motoyama Script generation for graceful termination of a web enabled client by a web server
US7941445B2 (en) * 2008-05-16 2011-05-10 Ricoh Company, Ltd. Managing project schedule data using separate current and historical task schedule data and revision numbers
US20090287522A1 (en) * 2008-05-16 2009-11-19 Tetsuro Motoyama To-Do List Representation In The Database Of A Project Management System
US8706768B2 (en) 2008-05-16 2014-04-22 Ricoh Company, Ltd. Managing to-do lists in task schedules in a project management system
US8321257B2 (en) * 2008-05-16 2012-11-27 Ricoh Company, Ltd. Managing project schedule data using separate current and historical task schedule data
US8352498B2 (en) 2008-05-16 2013-01-08 Ricoh Company, Ltd. Managing to-do lists in a schedule editor in a project management system
US20100070328A1 (en) * 2008-09-16 2010-03-18 Tetsuro Motoyama Managing Project Schedule Data Using Project Task State Data
US8862489B2 (en) * 2008-09-16 2014-10-14 Ricoh Company, Ltd. Project management system with inspection functionality
WO2010081536A1 (en) * 2009-01-13 2010-07-22 Bcs Biotech S.P.A. A biochip reader for qualitative and quantitative analysis of images, in particular for the analysis of single or multiple biochips
JP5324934B2 (en) * 2009-01-16 2013-10-23 株式会社ソニー・コンピュータエンタテインメント Information processing apparatus and information processing method
KR20100109195A (en) * 2009-03-31 2010-10-08 삼성전자주식회사 Method for adjusting bright of light sources and bio-disk drive using the same
KR101493133B1 (en) * 2009-07-01 2015-02-12 가부시키가이샤 니콘 Exposure condition evaluation method and exposure condition evaluatin apparatus
JP5576631B2 (en) * 2009-09-09 2014-08-20 キヤノン株式会社 Radiographic apparatus, radiographic method, and program
WO2011066269A1 (en) * 2009-11-24 2011-06-03 Siemens Healthcare Diagnostics Inc. Automated, refrigerated specimen inventory management system
US8759084B2 (en) 2010-01-22 2014-06-24 Michael J. Nichols Self-sterilizing automated incubator
DE102010060634B4 (en) * 2010-11-17 2013-07-25 Andreas Hettich Gmbh & Co. Kg Air conditioning room for a time-controlled storage of samples and methods for time-controlled storage of samples
US8396876B2 (en) 2010-11-30 2013-03-12 Yahoo! Inc. Identifying reliable and authoritative sources of multimedia content
US9522396B2 (en) 2010-12-29 2016-12-20 S.D. Sight Diagnostics Ltd. Apparatus and method for automatic detection of pathogens
JP5247942B2 (en) * 2011-02-24 2013-07-24 三洋電機株式会社 Conveyor, culture device
US8640964B2 (en) 2011-06-01 2014-02-04 International Business Machines Corporation Cartridge for storing biosample plates and use in automated data storage systems
US9286914B2 (en) 2011-06-01 2016-03-15 International Business Machines Corporation Cartridge for storing biosample capillary tubes and use in automated data storage systems
US8380541B1 (en) 2011-09-25 2013-02-19 Theranos, Inc. Systems and methods for collecting and transmitting assay results
US9619627B2 (en) 2011-09-25 2017-04-11 Theranos, Inc. Systems and methods for collecting and transmitting assay results
US20130073221A1 (en) * 2011-09-16 2013-03-21 Daniel Attinger Systems and methods for identification of fluid and substrate composition or physico-chemical properties
CN106840812B (en) 2011-12-29 2019-12-17 思迪赛特诊断有限公司 Methods and systems for detecting pathogens in biological samples
US9449380B2 (en) 2012-03-20 2016-09-20 Siemens Medical Solutions Usa, Inc. Medical image quality monitoring and improvement system
JP5994337B2 (en) * 2012-03-30 2016-09-21 Sony Corporation Fine particle sorting device and delay time determination method
EP2852820B1 (en) * 2012-05-31 2023-05-03 Agilent Technologies, Inc. Universal multi-detection system for microplates
JP6034073B2 (en) * 2012-07-03 2016-11-30 Screen Holdings Co., Ltd. Image analysis apparatus and image analysis method
US9250254B2 (en) 2012-09-30 2016-02-02 International Business Machines Corporation Biosample cartridge with radial slots for storing biosample carriers and using in automated data storage systems
US11187713B2 (en) * 2013-01-14 2021-11-30 Stratec Se Laboratory module for storing and feeding to further processing of samples
GB2509758A (en) * 2013-01-14 2014-07-16 Stratec Biomedical Ag A laboratory module for storing and moving samples
CN109813923A (en) * 2013-02-18 2019-05-28 Theranos IP Company, LLC System and method for acquiring and transmitting measurement results
HU230739B1 (en) * 2013-02-28 2018-01-29 3Dhistech Kft. Apparatus and method for automatic staining masking, digitizing of slides
US20140281516A1 (en) 2013-03-12 2014-09-18 Commvault Systems, Inc. Automatic file decryption
EP2784476B1 (en) * 2013-03-27 2016-07-20 Ul Llc Device and method for storing sample bodies
WO2014188405A1 (en) 2013-05-23 2014-11-27 Parasight Ltd. Method and system for imaging a cell sample
US9809898B2 (en) * 2013-06-26 2017-11-07 Lam Research Corporation Electroplating and post-electrofill systems with integrated process edge imaging and metrology systems
IL227276A0 (en) * 2013-07-01 2014-03-06 Parasight Ltd A method and system for preparing a monolayer of cells, particularly suitable for diagnosis
ES2724327T3 (en) * 2013-07-25 2019-09-10 Theranos Ip Co Llc Systems and methods for a distributed clinical laboratory
US10831013B2 (en) 2013-08-26 2020-11-10 S.D. Sight Diagnostics Ltd. Digital microscopy systems, methods and computer program products
US9822460B2 (en) 2014-01-21 2017-11-21 Lam Research Corporation Methods and apparatuses for electroplating and seed layer detection
JP6687524B2 (en) 2014-01-30 2020-04-22 ビーディー キエストラ ベスローテン フェンノートシャップ System and method for image acquisition using supervised high quality imaging
US11041871B2 (en) 2014-04-16 2021-06-22 Bd Kiestra B.V. System and method for incubation and reading of biological cultures
DE102014011941B3 (en) * 2014-08-14 2015-08-20 Ika-Werke Gmbh & Co. Kg Shelf and incubator
EP3186778B1 (en) 2014-08-27 2023-01-11 S.D. Sight Diagnostics Ltd. System and method for calculating focus variation for a digital microscope
DE102014217328A1 (en) * 2014-08-29 2016-03-03 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Method and device for imaging in microscopy
US9405928B2 (en) * 2014-09-17 2016-08-02 Commvault Systems, Inc. Deriving encryption rules based on file content
WO2016086945A1 (en) * 2014-12-04 2016-06-09 Chemometec A/S Image cytometer implementation
EP3859425B1 (en) 2015-09-17 2024-04-17 S.D. Sight Diagnostics Ltd. Methods and apparatus for detecting an entity in a bodily sample
US9735035B1 (en) 2016-01-29 2017-08-15 Lam Research Corporation Methods and apparatuses for estimating on-wafer oxide layer reduction effectiveness via color sensing
EP3223019B1 (en) * 2016-03-22 2021-07-28 Beckman Coulter, Inc. Method, computer program product, and system for establishing a sample tube set
CA3018536A1 (en) 2016-03-30 2017-10-05 S.D. Sight Diagnostics Ltd Distinguishing between blood sample components
US11307196B2 (en) 2016-05-11 2022-04-19 S.D. Sight Diagnostics Ltd. Sample carrier for optical measurements
WO2017195208A1 (en) 2016-05-11 2017-11-16 S.D. Sight Diagnostics Ltd Performing optical measurements on a sample
US20180077242A1 (en) * 2016-09-09 2018-03-15 Andrew Henry Carl Network communication technologies for laboratory instruments
US10935779B2 (en) * 2016-10-27 2021-03-02 Scopio Labs Ltd. Digital microscope which operates as a server
MX2019005731A (en) * 2016-11-18 2019-10-21 Cepheid Sample processing module array handling system and methods.
WO2018146699A1 (en) * 2017-02-07 2018-08-16 Shilps Sciences Private Limited A system for microdroplet manipulation
US11276163B2 (en) 2017-05-02 2022-03-15 Alvitae LLC System and method for facilitating autonomous control of an imaging system
EP3422288B1 (en) 2017-06-26 2020-02-26 Tecan Trading Ag Imaging a well of a microplate
CN107818559B (en) * 2017-09-22 2021-08-20 Taiyuan University of Technology Crystal inoculation state detection method and crystal inoculation state image acquisition device
AU2018369859B2 (en) 2017-11-14 2024-01-25 S.D. Sight Diagnostics Ltd Sample carrier for optical measurements
US10456788B2 (en) * 2018-01-26 2019-10-29 Yury Sherman Apparatus for disruption of cell and tissue samples in multi-well plates
EP3575742B1 (en) * 2018-05-29 2022-01-26 Global Scanning Denmark A/S A 3d object scanning using structured light
EP4321862A2 (en) * 2018-08-07 2024-02-14 BriteScan, LLC Portable scanning device for ascertaining attributes of sample materials
CN112912781B (en) * 2018-08-29 2022-11-29 Etaluma, Inc. Illuminated display as an illumination source for microscopy
US11010591B2 (en) * 2019-02-01 2021-05-18 Merck Sharp & Dohme Corp. Automatic protein crystallization trial analysis system
US20200271682A1 (en) * 2019-02-27 2020-08-27 Alpha Space Test and Research Alliance, LLC Systems and Methods for Environmental Factor Interaction Characterization
CN111024696B (en) 2019-12-11 2022-01-11 Shanghai Ruiyu Biotechnology Co., Ltd. Algae analysis method
US11379697B2 (en) 2020-05-20 2022-07-05 Bank Of America Corporation Field programmable gate array architecture for image analysis
US11295430B2 (en) 2020-05-20 2022-04-05 Bank Of America Corporation Image analysis architecture employing logical operations
DE102021112938A1 (en) * 2021-05-19 2022-11-24 Bmg Labtech Gmbh microplate reader
CN113340904A (en) * 2021-06-01 2021-09-03 China Tobacco Guizhou Industrial Co., Ltd. Method for detecting shrinkage of tobacco flakes
CN113963513A (en) * 2021-10-13 2022-01-21 Third Research Institute of the Ministry of Public Security Robot system for realizing intelligent inspection in chemical industry and control method thereof
GB2613008A (en) 2021-11-19 2023-05-24 Agilent Technologies Inc Object handler in particular in an analytical system


Family Cites Families (51)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS5419904Y2 (en) * 1971-11-29 1979-07-20
JPS5126027A (en) * 1974-08-27 1976-03-03 Canon Kk
JPS5136934A (en) * 1974-09-24 1976-03-29 Canon Kk
US4199013A (en) * 1977-04-01 1980-04-22 Packard Instrument Company, Inc. Liquid sample aspirating and/or dispensing system
US4422151A (en) * 1981-06-01 1983-12-20 Gilson Robert E Liquid handling apparatus
US4609017A (en) * 1983-10-13 1986-09-02 Coulter Electronics, Inc. Method and apparatus for transporting carriers of sealed sample tubes and mixing the samples
US4815845A (en) * 1986-04-16 1989-03-28 Westinghouse Electric Corp. Axial alignment aid for remote control operations and related method
US5105424A (en) * 1988-06-02 1992-04-14 California Institute Of Technology Inter-computer message routing system with each computer having separate routing automata for each dimension of the network
GB8816982D0 (en) 1988-07-16 1988-08-17 Probus Biomedical Ltd Bio-fluid assay apparatus
US5468110A (en) 1990-01-24 1995-11-21 Automated Healthcare, Inc. Automated system for selecting packages from a storage area
US5199840A (en) * 1990-08-01 1993-04-06 John Castaldi Automated storage and retrieval system
JPH04216886A (en) * 1990-12-17 1992-08-06 Lintec Corp Self-adhesive sheet resistant to blistering
GB2269473A (en) * 1992-08-08 1994-02-09 Ibm A robotic cassette transfer apparatus
WO1994011489A1 (en) * 1992-11-06 1994-05-26 Biolog, Inc. Testing device for liquid and liquid suspended samples
JP3314440B2 (en) * 1993-02-26 2002-08-12 Hitachi, Ltd. Defect inspection apparatus and method
US5614129A (en) * 1993-04-21 1997-03-25 California Institute Of Technology Potassium lithium tantalate niobate photorefractive crystals
US5539975A (en) * 1993-09-08 1996-07-30 Allen-Bradley Company, Inc. Control system and equipment configuration for a modular product assembly platform
US5544256A (en) * 1993-10-22 1996-08-06 International Business Machines Corporation Automated defect classification system
US5552890A (en) * 1994-04-19 1996-09-03 Tricor Systems, Inc. Gloss measurement system
US6800452B1 (en) * 1994-08-08 2004-10-05 Science Applications International Corporation Automated methods for simultaneously performing a plurality of signal-based assays
US5557097A (en) * 1994-09-20 1996-09-17 Neopath, Inc. Cytological system autofocus integrity checking apparatus
US6226032B1 (en) * 1996-07-16 2001-05-01 General Signal Corporation Crystal diameter control system
JPH1042204A (en) * 1996-07-25 1998-02-13 Hitachi Ltd Video signal processor
US5921739A (en) * 1997-02-10 1999-07-13 Keip; Charles P. Indexing parts tray device
US5985214A (en) * 1997-05-16 1999-11-16 Aurora Biosciences Corporation Systems and methods for rapidly identifying useful chemicals in liquid samples
US6529612B1 (en) * 1997-07-16 2003-03-04 Diversified Scientific, Inc. Method for acquiring, storing and analyzing crystal images
DE69823116D1 (en) * 1997-08-05 2004-05-19 Canon Kk Image processing method and device
US5961716A (en) * 1997-12-15 1999-10-05 Seh America, Inc. Diameter and melt measurement method used in automatically controlled crystal growth
CA2315809C (en) * 1997-12-23 2014-06-03 Dako A/S Cartridge device for processing a sample mounted on a surface of a support member
US6175652B1 (en) * 1997-12-31 2001-01-16 Cognex Corporation Machine vision system for analyzing features based on multiple object images
US6455861B1 (en) * 1998-11-24 2002-09-24 Cambridge Research & Instrumentation, Inc. Fluorescence polarization assay system and method
US6271022B1 (en) * 1999-03-12 2001-08-07 Biolog, Inc. Device for incubating and monitoring multiwell assays
US6368475B1 (en) * 2000-03-21 2002-04-09 Semitool, Inc. Apparatus for electrochemically processing a microelectronic workpiece
JP2000333905A (en) * 1999-05-31 2000-12-05 Nidek Co Ltd Ophthalmic device
US6788411B1 (en) * 1999-07-08 2004-09-07 Ppt Vision, Inc. Method and apparatus for adjusting illumination angle
US6203082B1 (en) * 1999-07-12 2001-03-20 Rd Automation Mounting apparatus for electronic parts
US6360792B1 (en) * 1999-10-04 2002-03-26 Robodesign International, Inc. Automated microplate filling device and method
WO2001061890A1 (en) * 2000-02-17 2001-08-23 Lumenare Networks A system and method for remotely configuring testing laboratories
US6701845B2 (en) * 2000-03-17 2004-03-09 Nikon Corporation & Nikon Technologies Inc. Print system and handy phone
JP2001284416A (en) * 2000-03-30 2001-10-12 Nagase & Co Ltd Low temperature test device
US7352889B2 (en) * 2000-10-30 2008-04-01 Ganz Brian L Automated storage and retrieval device and method
US6637473B2 (en) * 2000-10-30 2003-10-28 Robodesign International, Inc. Automated storage and retrieval device and method
US6985616B2 (en) * 2001-10-18 2006-01-10 Robodesign International, Inc. Automated verification and inspection device for sequentially inspecting microscopic crystals
US20020102149A1 (en) 2001-01-26 2002-08-01 Tekcel, Inc. Random access storage and retrieval system for microplates, microplate transport and microplate conveyor
US6627461B2 (en) * 2001-04-18 2003-09-30 Signature Bioscience, Inc. Method and apparatus for detection of molecular events using temperature control of detection environment
CA2764307C (en) * 2001-06-29 2015-03-03 Meso Scale Technologies, Llc. Assay plates, reader systems and methods for luminescence test measurements
DE10157121A1 (en) 2001-11-21 2003-05-28 Richard Balzer Dynamic storage and material flow system has part systems coupled at one or more points
US6860940B2 (en) * 2002-02-11 2005-03-01 The Regents Of The University Of California Automated macromolecular crystallization screening
ITMO20020076A1 (en) 2002-03-29 2003-09-29 Ronflette Sa AUTOMATED WAREHOUSE
US6871922B1 (en) * 2002-10-28 2005-03-29 Feliks Pustilnikov Rotating shelf assembly
GB0415307D0 (en) 2004-07-08 2004-08-11 Rts Thurnall Plc Automated store

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6267722B1 (en) * 1998-02-03 2001-07-31 Adeza Biomedical Corporation Point of care diagnostic systems

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2703871A3 (en) * 2005-05-25 2014-09-03 Massachusetts Institute Of Technology Multifocal scanning microscopy systems and methods
RU2799730C1 (en) * 2022-11-15 2023-07-11 Norgau Labs LLC Device for reciprocating linear movement of the measuring microscope

Also Published As

Publication number Publication date
WO2004071067A2 (en) 2004-08-19
WO2004069409A3 (en) 2009-04-02
WO2004071067A3 (en) 2005-01-27
WO2004069984A3 (en) 2005-05-26
US20040256963A1 (en) 2004-12-23
WO2004070653A3 (en) 2005-01-06
US20040218804A1 (en) 2004-11-04
US20040260782A1 (en) 2004-12-23
US20040253742A1 (en) 2004-12-16
US7596251B2 (en) 2009-09-29
WO2004069984A2 (en) 2004-08-19
WO2004069409A2 (en) 2004-08-19

Similar Documents

Publication Publication Date Title
US20040218804A1 (en) Image analysis system and method
JP6437947B2 (en) Fully automatic rapid microscope slide scanner
US7433025B2 (en) Automated protein crystallization imaging
EP2577602B1 (en) System and method to determine slide quality of a digitized microscope slide
JP2017194700A5 (en)
JP2017194699A5 (en)
JP2005520174A5 (en)
EP2169379A2 (en) Sample imaging apparatus
Dolleiser et al. A fully automated optical microscope for analysis of particle tracks in solids
JP2005031664A (en) Method for operating laser scanning type microscope
KR102010819B1 (en) Apparatus for capturing images of blood cell and image analyzer with the same

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): BW GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase