WO1999067739A1 - Three dimensional dynamic image analysis system - Google Patents

Three dimensional dynamic image analysis system

Info

Publication number
WO1999067739A1
WO1999067739A1 PCT/US1999/013193
Authority
WO
WIPO (PCT)
Prior art keywords
digitized optical
optical sections
digitized
pixels
wrap
Application number
PCT/US1999/013193
Other languages
French (fr)
Inventor
David R. Soll
Edwin R. Voss
Original Assignee
University Of Iowa Research Foundation
Application filed by University Of Iowa Research Foundation filed Critical University Of Iowa Research Foundation
Priority to AU44361/99A priority Critical patent/AU4436199A/en
Publication of WO1999067739A1 publication Critical patent/WO1999067739A1/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/20 - Analysis of motion
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/50 - Depth or shape recovery
    • G06T7/55 - Depth or shape recovery from multiple images
    • G06T7/571 - Depth or shape recovery from multiple images from focus
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30004 - Biomedical image processing
    • G06T2207/30024 - Cell structures in vitro; Tissue sections in vitro

Definitions

  • the invention relates generally to motion analysis, and more specifically, to a three dimensional elapsed time system for the analysis of the motility and morphology of a moving object.
  • An electronic signal corresponding to the images is input into a digitizer which identifies the coordinates of the periphery of the mobile object in each of the images.
  • a digital processor processes the contour information, and a computer controlled by a software program having image processing and graphics capabilities calculates a plurality of parameters representative of the shape and motion of the object.
  • the output from the computer may be displayed in graphical representations, in tabular form, in the form of animations on a monitor, or in hard copy printouts of tables, animations, and other graphical representations in two dimensions.
  • Such a system lacks the ability to fully capture every aspect of the dynamic morphology of a moving object.
  • the specification of the invention includes a microfiche appendix submitted according to 37 C.F.R. § 1.96 of twenty-two (22) microfiche, comprising a total of 2139 frames.
  • the appendix is a printout of the source code of the computer program which controls operation of the present invention.
  • An object of the present invention comprises providing a method for the three dimensional elapsed time analysis of the motility and morphology of a moving object.
  • a further object of the present invention comprises providing a system for the three dimensional elapsed time analysis of the motility and morphology of a moving object.
  • a microscope is used to optically section an object at a plurality of focal depths over a plurality of time periods.
  • the optical sections are digitized, and a tag allows identification of at least the time and the focal depth of each digitized optical section.
  • Image processing creates an outline of the periphery of the object through application of a complexity threshold algorithm.
  • a plurality of parameters representing the motility and morphology of the object are calculated.
  • a three dimensional graphical representation of the object is reconstructed from the plurality of digitized optical sections for computerized viewing.
  • Fig. 1 is a component diagram of a 3-D digital image analysis system.
  • Fig. 2 is a top plan view of the optical sectioning and outlining of a motile object at a plurality of focal depths.
  • Fig. 3 is a block diagram of the outlining process.
  • Fig. 4 is a top plan view of an optical section of an object and the digitized outline of the object.
  • Fig. 5 is a dilation of the digitized outline of Fig. 4.
  • Fig. 6 is an erosion of the digitized outline of Fig. 5.
  • Fig. 7 is a top plan view of a digitized optical section.
  • Fig. 8 is a display from a graphical user interface depicting a plurality of parameters representing the motility and morphology of an object.
  • Fig. 9 is a top plan view of a plurality of optical sections and their corresponding outlines at a plurality of focal depths.
  • Fig. 10a is a top plan view of a plurality of digitized optical sections at a plurality of focal depths, with the out of focus backgrounds subtracted out.
  • Fig. 10b is a stacked image reconstruction of the plurality of optical sections shown in Fig. 10a viewed from a plurality of attitudes.
  • Fig. 10c is a faceted image reconstruction of the optical sections of Fig. 10a viewed at a plurality of attitudes.
  • Fig. 11 is a top plan view of outlines of a plurality of digitized optical sections stacked with contour levels corresponding to focal depths.
  • Fig. 12 is a stacked image reconstruction with a slotted interior viewed over several periods of time, with a faceted image reconstruction of the portion of the object corresponding to the slot.
  • Fig. 13 is a top plan view of an outline of a digitized optical section and a slot.
  • Fig. 14 is a slotted faceted image reconstruction of an object over several periods of time.
  • Fig. 15 is an elevation view of the outline of a plurality of digitized optical sections with a lateral indentation.
  • Fig. 16 is a component diagram of an alternative 3-D image analysis system.
  • Fig. 17 is an illustration of turning angles used to compute convexity and concavity.
  • Fig. 18 is a graph of speed versus time.
  • Fig. 19 is a top plan view of a digitized optical section and outline.
  • Fig. 20 is a top plan view of a digitized optical section and outline.
  • Fig. 21 is a top plan view of a digitized optical section and outline.
  • Fig. 22 is a top plan view of a digitized optical section and outline.
  • Fig. 23 is a top plan view of a digitized optical section and outline.
  • Fig. 24 is a top plan view of a digitized optical section and outline.
  • Fig. 1 shows a 3-D digital image analysis system (DIAS) 10.
  • the 3-D DIAS System 10 comprises an inverted compound microscope 12 fitted with differential interference contrast (DIC) optics, a camera 14, a VCR 16, a frame grabber 18, a character generator 20, a computer 22 having a serial port 24, a computer display terminal 26, and a keyboard 28.
  • a stepper motor 13 attaches to a focus knob 11 of the DIC microscope 12.
  • the stepper motor 13 comprises a computer programmed MicroStepZ3D stepping motor.
  • the camera 14 configures for NTSC video, and in the preferred embodiment of the invention comprises a cooled CCD camera which can handle 30 frames per second without motion blurring.
  • the VCR 16 comprises a conventional high quality tape recorder or video disk system, equipped with a frame grabber 18.
  • the frame grabber 18 configures for use with a Macintosh operating system based computer capable of grabbing 30 frames per second of at least a 3/4 size image and storing the results as a QuickTime movie.
  • the computer 22 comprises a Macintosh computer, in particular a power PC based computer with a core processor speed of at least 225 megahertz, a two gigabyte hard drive, and forty- eight megabytes of RAM.
  • the computer display terminal 26 is capable of pseudo three dimensional viewing through a stereo graphics "crystal eyes" 3-D display screen with special glasses 29, or, at a fifty percent or greater reduction in resolution, a standard color display with inexpensive red-blue stereo glasses.
  • the computer 22 can comprise any number of types and varieties of general purpose computers, or a digital camera with a direct link to the computer 22 could replace the camera 14 and VCR 16.
  • the preferred embodiment of the present invention utilizes differential interference contrast microscopy.
  • DIC optics has the advantage of high resolution microscopy, without the use of dyes or lasers, which may lead to the premature death of the organisms due to increases in heat and the effects of phototoxicity. Premature death leads to shortened periods of motility and dynamic morphology for analysis.
  • confocal optical systems that use lasers typically require application of stains or dyes to the motile objects. This will kill a living object, which eliminates the possibility of analyzing the object's motility and morphology.
  • Deconvolution methods involve phase or standard light microscope images, and presently do not exhibit sufficient optical quality to practice the present invention.
  • while DIC microscopy comprises the preferred method of practicing the present invention, the possibility exists to use other microscopy techniques despite their drawbacks.
  • the computer 22 performs the methods of the present invention under computer control through the use of programming means in the form of a 3-D DIAS software package (see microfiche appendix). The method begins by placing the sample object on the DIC microscope 12.
  • Since typically the object comprises a living cell, the object is contained in a fluid filled viewing chamber (not shown). Accordingly, the supporting materials must be of the correct width and chemical nature (glass vs. plastic vs. quartz) to be compatible with the focal depth and the light transmission for the particular objects used. A magnification must be selected which is compatible with the speed of cellular translocation over the period of recording and, most importantly, compatible with the size of the cell.
  • the stepper motor 13 must be programmed so that one cycle spans the desired Z-axis focal depth.
  • the method comprises optically sectioning an object at a plurality of focal depths over a first period of time.
  • a scan rate must be chosen. A two second scan in either direction, up or down, including 30 optical sections is more than sufficient for the analysis of cells moving at velocities of seven to twenty microns per minute. This rate results in relatively small errors due to cell movement during the time of sectioning (a cell translocating at twenty microns per minute moves less than a micron during a two second scan).
  • a fast rate and a fast frequency of scanning would include sequential up and down scans each including 30 frames over one second through ten microns.
  • although the optical sections can be read directly into the frame grabber 18, it is more effective initially to make a video recording or tape for several reasons.
  • image acquisition on video tape is relatively limitless and inexpensive and, therefore, will accommodate extended recording periods. Real time frame grabbing will have storage limits.
  • the image acquisition on tape allows the character generator 20 and the stepper motor 13 to notate each video frame for time, height, and direction of scan.
  • as each of the plurality of optical sections is read, the image from the camera 14 transfers to the VCR 16, then to the frame grabber 18, and into the computer 22 via serial port 24.
  • This process repeats for a plurality of focal depths over a first period of time.
  • the focal depth varies through movement of the stepper motor 13 fixed to the focus knob 11 of the DIC microscope 12.
  • the frame grabber 18 digitizes each of the plurality of optical sections and then transfers the data to the computer 22.
  • the stepper motor 13 and the character generator 20 simultaneously transfer information to the computer 22, which associates a tag with each of the plurality of digitized optical sections.
  • the tag allows identification of at least the time and the focal depth corresponding to each of the plurality of digitized optical sections.
  • the data transfers into the computer 22, preferably a Macintosh computer, and results in the creation of a QuickTime movie.
  • the present invention also works with PICT stacks in addition to QuickTime movies. Digitized optical sections can be read into the computer 22 at a maximum rate of thirty frames per second or, if desired, a lower rate such as ten or twenty frames per second. Those of ordinary skill in the art will appreciate the applicability of the present invention to even higher rates of capture as the technology develops. A twenty minute segment read in at thirty frames per second will take more than five hundred megabytes of storage on a hard disk.
  • the QuickTime movie is synchronized to the automatic up and down scans, and the times of the scans are recorded in a synchronization file in the computer 22.
  • a user can reduce the size of the optical section to a specific window which contains only a portion of interest, thereby reducing the amount of digitized information.
  • the 3-D DIAS movie allows for frame averaging to reduce background noise and accentuate the periphery of the object. For instance, at a rate of thirty frames per second, every three frames can be averaged in an overlapping fashion, resulting in the second to twenty-ninth optical sections averaged with their two neighboring sections, and the two end sections (one and thirty) averaged with only one neighboring section.
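For illustration, the overlapping three-frame average described above can be sketched as follows. This is a minimal reading of the passage, assuming the thirty sections of one scan are held in a NumPy array; the function name and array layout are assumptions, not the appendix implementation.

```python
import numpy as np

def overlapping_average(sections):
    """Average each optical section with its immediate neighbors.

    sections: array of shape (n_sections, height, width), e.g. the
    thirty sections of one two-second scan.  Interior sections are
    averaged with both neighbors; the two end sections with only one.
    """
    out = np.empty_like(sections, dtype=np.float64)
    n = len(sections)
    for i in range(n):
        lo, hi = max(0, i - 1), min(n, i + 2)   # window of 2 or 3 frames
        out[i] = sections[lo:hi].mean(axis=0)
    return out
```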
  • Fig. 10a shows a portion of a set of twelve digitized optical sections 32 of a Dictyostelium amoeba at one micron increments taken in a two second period and averaged over three frames, providing in focus perimeters amenable to subsequent automatic outlining (see also Fig. 2).
  • the next step comprises outlining the periphery of the objects for each of the plurality of digitized optical sections 32.
  • Fig. 2 shows the before and after effect of outlining an object at a plurality of focal depths.
  • Fig. 2a shows the original digitized optical sections 32 of an object at twelve different focal depths
  • Fig. 2b shows the same digitized optical sections 32 with the corresponding outlines 38 included.
  • the outline 38 attempts to trace the circumference of the in focus portion of the object.
  • Fig. 2 shows that not only does the size of the in focus portion of the object vary at different focal depths, but the surrounding background also varies. This comprises the significant challenge to the outlining process.
  • in some portions of the digitized optical section 32 the boundary between the in focus portion and the out of focus portion represents a bright area, while in other portions of the digitized optical sections 32 the boundary between the in focus and out of focus areas represents a dark area.
  • Fig. 3 shows in block diagram form the theoretical steps of the outlining process. Those of ordinary skill in the art will appreciate the fact that the order of the steps depicted in Fig. 3 can vary without departing from the intended scope of the present invention, and in some cases the computer 22 can perform the steps simultaneously.
  • Fig. 3 shows a smooth image step 102, which normally occurs at the beginning of image processing, to prepare the digitized optical section 32 for the actual outlining. Smoothing tends to remove the jagged and rough edges, and reduces the overall contrast.
  • the smooth image step 102 involves standard smoothing techniques
  • the next step comprises the complexity threshold step 104.
  • Complexity, in this case, is defined as the standard deviation from the mean pixel grayscale value within a 3x3 or 5x5 pixel neighborhood surrounding the pixel under analysis. The neighborhood is referred to as a kernel. Since the perimeter of a cell represents a boundary of high contrast, the standard deviation of the grayscale of a pixel at an edge and the pixels on either side (inside and outside of the cell) will be high; therefore, the complexity will also be high. In other words, for each of the digitized optical sections 32 the transition between the in focus region and the out of focus region is defined by an area of high grayscale contrast. In this manner, examining a 3x3 or 5x5 kernel and calculating the standard deviation of the grayscales of the kernel allows for identifying the boundaries of the cell periphery for a particular digitized optical section 32 at a particular focal depth. For each pixel, based on the pixel's corresponding kernel, a standard deviation representing the amount of grayscale variation within the kernel is calculated.
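As a concrete illustration of the complexity threshold step 104, the sketch below computes the kernel standard deviation for every pixel and thresholds it. The function name, the default 3x3 kernel, and the NumPy-based approach are assumptions for illustration; the appendix source code may differ.

```python
import numpy as np

def complexity_threshold(image, threshold, k=3):
    """Mark pixels whose kernel grayscale standard deviation exceeds
    a threshold: 0 (black) for high-complexity periphery pixels,
    255 (white) for low-complexity background and cell interior."""
    pad = k // 2
    padded = np.pad(image.astype(np.float64), pad, mode="edge")
    h, w = image.shape
    # Collect every k x k neighborhood and compute its standard deviation.
    windows = np.stack([padded[dy:dy + h, dx:dx + w]
                        for dy in range(k) for dx in range(k)])
    complexity = windows.std(axis=0)
    return np.where(complexity > threshold, 0, 255).astype(np.uint8)
```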
  • the particular digitized optical section 32 converts to an image where the background and the cell interior appear white and only the periphery of the object appears black.
  • the black areas then form the outline 38.
  • increasing the complexity threshold value will shrink or remove the outline 38, while lowering the complexity threshold value will increase the area of the outline 38.
  • Fig. 4a-b show two digitized optical sections 117, 118 in which application of the complexity threshold did not form complete outlines 120, 122.
  • the digitized optical section 117 appears in two portions with a fuzzy low contrast transition between the two. Therefore, application of the complexity threshold did not properly outline the transition area (see outline 120 of Fig. 4b).
  • the digitized optical section 118 shows that a portion of the periphery comprises a fuzzy low contrast region, which application of the complexity threshold technique failed to fully outline (see outline 122 of Fig. 4b). Accordingly, the outlines 120, 122 in Fig. 4b require further image processing. To deal with the situation of incomplete and partial outlines the 3D-DIAS System 10 provides the ability to dilate, erode, and smooth the digitized optical sections 32. Referring again to Fig. 4, applying the complexity threshold step 104 to digitized optical section 117 produces outline 120.
  • Fig. 4 shows that both the outlines 120, 122 do not completely enclose their respective objects 117, 118.
  • the first step in completing the outlines 120, 122 comprises the dilate step 106 (Fig. 3). Dilation involves selecting every pixel that surrounds a black pixel and converting that pixel to a grayscale of 0 (black).
  • Fig. 5a shows the dilation process applied to the outlines 120, 122.
  • the result comprises dilations 124, 126, broader outlines that fill in the gaps in the original outlines 120, 122 of Fig. 4b.
  • dilation involves adding the four horizontal and vertical neighboring pixels for each pixel of the digitized outlines 120, 122 appearing in Fig. 4b.
  • the dilation process fattens the object by the amount of dilation. In this manner, the gaps that appeared in the original outlines 120, 122 fill in.
  • the outer perimeters of dilation 124 and dilation 126 are outlined, creating a dilated outline 128 and a dilated outline 130 shown in Fig. 5b.
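A minimal sketch of this four-neighbor dilation, together with the complementary erosion used later (Fig. 6, erode step 115) to shrink the fattened outline back down, might look as follows. Representing the image as a boolean mask with True marking black pixels is an assumption of the sketch.

```python
import numpy as np

def dilate(mask):
    """One dilation pass: any pixel with a black 4-neighbor turns black.
    mask is a boolean array where True marks black (outline) pixels."""
    grown = mask.copy()
    grown[1:, :]  |= mask[:-1, :]   # pixel below a black pixel
    grown[:-1, :] |= mask[1:, :]    # pixel above
    grown[:, 1:]  |= mask[:, :-1]   # pixel to the right
    grown[:, :-1] |= mask[:, 1:]    # pixel to the left
    return grown

def erode(mask):
    """One erosion pass: keep a black pixel only if all 4-neighbors are black."""
    kept = mask.copy()
    kept[1:, :]  &= mask[:-1, :]
    kept[:-1, :] &= mask[1:, :]
    kept[:, 1:]  &= mask[:, :-1]
    kept[:, :-1] &= mask[:, 1:]
    return kept
```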
  • the 3D-DIAS System 10 utilizes additional image processing to smooth the black pixels remaining after the dilate step 106. Fig. 3 shows the smooth outline step at 108.
  • the smooth outline step 108 utilizes standard smoothing techniques.
  • one smoothing technique involves converting the locations of all non-white pixels to floating point numbers and then averaging the pixel locations over a neighborhood. A pixel is then added at a location as close as possible to the average location. This reduces the roughness of the outline 38.
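One plausible realization of this position-averaging technique, assuming the outline is available as an ordered list of (x, y) pixel locations, is a wrap-around moving average along the closed contour:

```python
import numpy as np

def smooth_outline(points, window=5):
    """Smooth an ordered, closed outline by averaging each point's
    position with its neighbors along the contour, then snapping the
    average back to the nearest pixel location."""
    pts = np.asarray(points, dtype=np.float64)
    n = len(pts)
    half = window // 2
    smoothed = np.empty_like(pts)
    for i in range(n):
        # wrap-around neighborhood, since the outline is closed
        idx = [(i + d) % n for d in range(-half, half + 1)]
        smoothed[i] = pts[idx].mean(axis=0)
    return np.rint(smoothed).astype(int)
```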
  • a grayscale threshold step 110 can further enhance the image processing.
  • the grayscale threshold step 110 merely removes pixels with grayscale values below the grayscale threshold value.
  • grayscale typically varies from 0 (white) to 255 (black).
  • the grayscale threshold can be expressed as a percent from 0% (white) to 100% (black). This step effectively reduces any remaining residual background areas.
  • a further technique to solve the problem of residual background areas comprises application of a minimum pixel filter step 112.
  • the minimum pixel filter step 112 searches for continuous black pixel regions where the number of pixels is less than the minimum pixel filter value, and then removes these pixel regions. This allows removal of small, high contrast regions appearing in the background of the digitized optical section 32.
  • the default for the minimum pixel filter value comprises twenty-five.
  • most of the outlined background consists of groups of between five and ten pixels.
  • a minimum pixel filter value of between five and ten will allow for the removal of these unwanted background objects without interfering with the outline 38 of the digitized optical section 32.
  • Fig. 3 also shows a maximum pixel filter step 114.
  • the maximum pixel filter step 114 allows for the elimination of large unwanted areas that appear within the digitized optical section 32.
  • the maximum pixel filter step 114 selects those regions of the digitized optical section 32 with continuous pixel groupings above the maximum pixel filter size, and removes them.
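Both pixel filters amount to filtering connected black-pixel regions by size. The sketch below uses scipy.ndimage as an assumed dependency; the appendix code may implement the scan differently.

```python
import numpy as np
from scipy import ndimage

def pixel_filter(mask, min_size=None, max_size=None):
    """Remove continuous black-pixel regions smaller than min_size or
    larger than max_size.  mask: boolean array, True = black pixel."""
    labels, n = ndimage.label(mask)                  # 4-connected regions by default
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    keep = np.ones(n + 1, dtype=bool)
    keep[0] = False                                  # background label
    if min_size is not None:
        keep[1:] &= sizes >= min_size
    if max_size is not None:
        keep[1:] &= sizes <= max_size
    return keep[labels]
```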
  • the eroded outlines 132, 134 now more accurately reflect the periphery of the object in the digitized optical sections 117, 118.
  • the dilate default equals three, since the erode default equals two and the smooth outline default equals one
  • Fig. 9 shows a further illustration of the result of outlining.
  • Fig. 9 shows a plurality of digitized optical sections 32, each taken at a different focal depth, and the associated outline 38 of each digitized optical section 32. In this case, not only do the outlines 38 change in size and shape, but some of the outlines 38 contain more than one distinct circumscribed area.
  • Figs. 19-21 show the effect of varying the number of times the smooth image step 102 is performed.
  • in Fig. 19 the smooth image step 102 is performed once,
  • in Fig. 20 the smooth image step 102 is performed twice, and
  • in Fig. 21 the smooth image step 102 is performed four times.
  • increasing the smoothing of the image effectively reduces the sharpness of the image and, therefore, reduces the complexity of the digitized optical section 32. This reduces the area of the outline 38, since the smoothing reduces the contrast of the digitized optical section 32.
  • Figs. 22-24 show the effect of differing combinations of the dilate step 106 and the erode step 115.
  • in each of Figs. 22-24 the smooth image step 102 is performed once, and the smooth outline step 108 is performed three times.
  • in Fig. 22 the dilate step 106 is performed twice, and the erode step 115 is not performed.
  • in Fig. 23 the dilate step 106 is performed three times, and the erode step 115 is performed six times.
  • in Fig. 24 the dilate step 106 is performed three times, and the erode step 115 is performed eight times.
  • the overall effect shown in Figs. 22-24 comprises increasing the gap between the number of dilate steps 106 and the number of erode steps 115, which in general reduces the size of the outline 38.
  • Fig. 7 shows an outline 38 with a lateral indentation 78.
  • the outline 38 represents the ideal, or perfect, outline 38.
  • Applying the above outlining parameters could result in filling in the lateral indentation 78 with outline 76 (shown in phantom).
  • the 3D-DIAS system 10 provides for the possibility of manual outlining.
  • the next step comprises reconstructing from the plurality of digitized optical sections 32 a three dimensional graphical reconstruction of the object for computerized viewing.
  • the 3D-DIAS System 10 contemplates two types of reconstructions.
  • the stacked image reconstruction 34 essentially comprises stacking each of the digitized optical sections 32, wherein the focal depth of the digitized optical sections 32 translates into a height.
  • Fig. 10a shows a plurality of twelve digitized optical sections 32 each at a different focal depth.
  • the computer, again under the control of programming means, constructs a stacked image reconstruction 34 by stacking each of the digitized optical sections 32 by height.
  • Fig. 10b shows the digitized optical sections from a 0° viewing attitude, with each digitized optical section labeled from one to twelve.
  • the digitized optical section 32 appearing in Fig. 10a (1) appears at the bottom of the stacked image reconstruction 34 shown in Fig. 10b at 0°.
  • the digitized optical section 32 appearing in Fig. 10a (12) appears at the top of the same stacked image reconstruction 34.
  • the stacked image reconstruction 34 viewed from the 0° viewing attitude only displays a side view of each digitized optical section 32, but clearly shows the height spacing between each digitized optical section 32.
  • Each stacked image reconstruction 34 displays only that portion of each of the plurality of digitized optical sections 32 defined by the outline 38 of the digitized optical sections 32, and visible from the particular viewing attitude.
  • the 30° stacked image reconstruction 34 of Fig. 10b shows the digitized optical sections 32 of Fig. 10a viewed from a viewing attitude of 30° above the horizontal. In this manner, the edges of the digitized optical sections 32 overlap each other clearly showing the three-dimensional nature of the stacked image reconstruction 34.
  • the stacked image reconstructions 34 essentially comprise overlapping a series of two dimensional digitized optical sections 32, and then displaying only that portion of the digitized optical sections 32 not overlapped or hidden by an underlying digitized optical section 32.
  • each subsequent digitized optical section 32 stacks over the top of the previous digitized optical section 32
  • the computer assigns a grayscale value to each point of each of the plurality of digitized optical sections 32, with the grayscale of each digitized optical section 32 decreasing by height.
  • Fig. 10b also shows the same stacked image reconstruction 34 displayed from a 60° viewing attitude and a 90° viewing attitude, which expose for viewing different portions of the digitized optical sections 32.
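To make the stacking concrete, a much-simplified sketch of a 90° (top-down) rendering follows: sections are composited bottom to top so that higher sections hide the overlapped portions of lower ones, with grayscale decreasing by height. The particular shading values and the boolean-mask inputs are assumptions of the sketch.

```python
import numpy as np

def stacked_top_view(masks, base_gray=40, step=15):
    """Composite section masks (lowest first) viewed from directly above.
    Higher sections are drawn last, hiding overlapped lower sections."""
    view = np.full(masks[0].shape, 255, dtype=np.uint8)   # white background
    for k, mask in enumerate(masks):
        # lower sections get larger (lighter) values; the top section the smallest
        gray = max(0, base_gray + step * (len(masks) - 1 - k))
        view[mask] = gray          # later (higher) sections overwrite lower ones
    return view
```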
  • Fig. 10c shows a faceted image reconstruction 36 of the plurality of digitized optical sections 32 appearing in Fig. 10a.
  • the faceted image reconstruction method begins by constructing a top wrap 84 and a bottom wrap 86 (see also Fig. 15).
  • the top wrap 84 is essentially identical to the stacked image reconstruction 34 shown in Fig. 10b viewed from a 90° attitude.
  • the bottom wrap 86 consists of the same stacked image reconstruction 34 viewed from a minus 90° attitude.
  • the faceted image reconstruction 36 consists of dividing the stacked image reconstruction 34 into a top wrap 84 and a bottom wrap 86.
  • in the analogy of casting a fishing net over the stacked image reconstruction 34, the webbing of the fishing net forms facets 94 that define the outer perimeter of the top wrap 84.
  • the process repeats for the bottom wrap 86.
  • the last step involves joining the top wrap 84 and the bottom wrap 86 at a seam. This process essentially creates a mathematical 3-D model of the enclosed object.
  • each identified pixel is assigned an X, Y and Z coordinate, where the X and Y coordinates correlate to the pixel's row and column and the Z coordinate represents the height of the location of the pixel in the faceted image reconstruction 36. If a particular pixel 95 happens to lie exactly on the outline 38 of a particular digitized optical section 32 (see Fig. 11), then the Z coordinate equals the height of that particular digitized optical section 32.
  • the pixels that lie directly on an outline 38 of a digitized optical section 32 essentially lie exactly on a contour line. This allows for quickly determining the Z coordinate for these particular pixels.
  • for a pixel that does not lie on an outline 38, the Z coordinate is assigned a height based on a weighted average.
  • the Z coordinate of these pixels can be designated an easily recognizable arbitrary number like one million.
  • Fig. 11 shows a plurality of digitized optical sections 32 stacked according to height, where the height corresponds to the focal depth of the particular digitized optical section 32. In this manner, the plurality of digitized optical sections 32 take on the look of a contour map.
  • Fig. 11 shows a pixel 96 located between a zero micron contour level 66, a plus one micron contour level 68, and a plus two micron contour level 70. Since the pixel 96 does not lie directly on any of the outlines 38, the Z coordinate of the pixel 96 must equal a value somewhere between the heights of the surrounding outlines 38 of the digitized optical sections 32.
  • One method to calculate the Z coordinate value of the pixel 96 involves drawing a plurality of rays from the pixel 96 to the surrounding outlines 38 and weighting the shorter rays more than the longer rays.
  • Fig. 11 shows eight rays extending at 45° angles from the pixel 96.
  • a first ray 50, a second ray 52, a third ray 54, a fourth ray 56, a fifth ray 58, and a sixth ray 60 all extend from the pixel 96 to the zero micron contour level 66.
  • a seventh ray 62 extends from the pixel 96 to the plus one micron contour level 68, and an eighth ray 64 extends from the pixel 96 to the plus two micron contour level 70.
  • each of the eight rays 50-64 extends a certain length L1-L8 and contacts an outline 38 at a particular height H1-H8.
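The weighting formula is not spelled out in this excerpt, but an inverse-length weighting realizes the stated rule that shorter rays count more heavily. The sketch below is one plausible reading; the 1/length weight is an assumption.

```python
def interpolate_z(lengths, heights):
    """Weighted-average Z for a pixel lying between contour outlines.

    lengths: L1..L8, distance of each 45-degree ray to the outline it hits
    heights: H1..H8, the contour height of the outline each ray contacts
    Shorter rays are weighted more heavily (here: weight = 1 / length).
    """
    weights = [1.0 / L for L in lengths]
    return sum(w * h for w, h in zip(weights, heights)) / sum(weights)

# Example: a pixel whose rays mostly hit the nearby 0-micron contour lands
# close to 0; the distant +1 and +2 micron contours pull it up only slightly.
z = interpolate_z([3, 4, 5, 6, 4, 3, 8, 10], [0, 0, 0, 0, 0, 0, 1, 2])
```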
  • the only difference between the bottom wrap 86 and the top wrap 84 is the viewing attitude: the top wrap 84 views the stacked image reconstruction 34 from above (a 90° attitude), while the bottom wrap 86 views it from below (a minus 90° attitude).
  • the vertices of the facets 94 have X, Y and Z coordinates defined by the pixels identified in the wrapping process.
  • Fig. 15 shows the facets 94; Fig. 10c provides a better illustration, although there the facets are too numerous to label individually.
  • Fig. 10c shows the faceted image reconstruction 36 of the digitized optical sections 32 of Fig. 10a.
  • the faceted image reconstruction 36 of Fig. 10c viewed at an attitude of 90° shows the facets 94 of the top wrap 84. Dividing the top wrap 84 and the bottom wrap 86 according to vertical and horizontal contour lines creates the facets 94 at the intersections of the contour lines. Therefore, the perimeter of each facet 94 is defined by pixels with X, Y and Z coordinates.
  • evaluating the X, Y and Z coordinates of the pixels of the top wrap 84 and the bottom wrap 86 allows identification of a seam which defines the intersection of the facets 94 of the top wrap 84 and the facets 94 of the bottom wrap 86. Joining the top wrap 84 and the bottom wrap 86 at the seam allows creation of the faceted image reconstruction 36, and repeating this process over several periods of time allows creation of a three dimensional elapsed time faceted image reconstruction.
  • Fig. 15 shows a stacked image reconstruction 34 of a plurality of digitized optical sections 32 which contains a pronounced lateral indentation 78.
  • the faceted image reconstruction method will not accurately describe the area of indentation, because digitized optical sections 32 above and below the lateral indentation 78 overhang the lateral indentation 78.
  • in the fishing net analogy, casting a net over the stacked image reconstruction 34 will not completely define the surface area defined by the digitized optical sections 32.
  • the solution to this problem involves identifying the lateral indentation 78 at the maximum point of advance, and then subdividing the object at the maximum point of advance creating a top partial wrap 80 and a bottom partial wrap 82.
  • the method proceeds by performing the aforementioned steps of creating the faceted image reconstruction 36 on the top partial wrap 80, and the bottom partial wrap 82.
  • Identification of the lateral indentation 78 generally requires manual intervention, wherein the necessity of identifying the lateral indentation 78 will depend on the particular circumstances and the contour of the particular object involved.
  • Joining the top partial wrap 80 and the bottom partial wrap 82 at their seam results in creation of a partial faceted image reconstruction 98.
  • the partial faceted image reconstruction 98 clearly shows the lateral indentation 78.
  • the process of creating the partial faceted image reconstruction 98 merely involves dividing either the top wrap 84 or the bottom wrap 86 at the lateral indentation 78, and then separately processing the top partial wrap 80, and the bottom partial wrap 82.
  • This process can repeat in order to define successive lateral indentations.
  • Some situations may require tracking the motility and morphology of an interior portion of the moving object.
  • Fig. 12 shows an example of such a situation.
  • Fig. 12 shows a stacked image reconstruction 34 of a cell over a plurality of time periods. Each of the stacked image reconstructions 34 of the cell contains a slot 40, representing the location of the nucleus of the cell.
  • Fig. 13 shows a single digitized optical section 32 with a slot 40, and a slice 74 which divides the digitized optical section 32 and the slot 40 into two portions.
  • Creating the stacked image reconstruction 34 of Fig. 12 involves outlining each digitized optical section 32, identifying a slot 40 in each of the digitized optical sections 32, and dividing each slot 40 of each digitized optical section 32 at a slice 74.
  • the stacked image reconstruction 34 involves stacking one of the portions of the digitized optical sections 32 defined by the slice 74. This allows viewing both the stacked image reconstruction 34 and the slot 40 in the same image.
  • Outlining the slot 40 can involve the aforementioned automatic outlining process; or can proceed manually.
  • Fig. 14 shows an example of a plurality of faceted image reconstructions 36 over a period of time including a first faceted slot 44, a second faceted slot 46, and a third faceted slot 48.
  • Fig. 14 shows the faceted image reconstruction 36 at seven different time periods, and from two different viewing attitudes. The top group of faceted image reconstructions 36 appears at a 0° viewing attitude, while the bottom group of faceted image reconstructions 36 appears at a 90° viewing attitude. In this manner, Fig. 14 shows that the method of the present invention can depict the motility and morphology of not only a moving object, but of selected portions of the moving object.
  • the reconstruction methods of the present invention provide a three dimensional mathematical model for computing motility and dynamic morphology parameters.
  • Fig. 8 shows an example of a graphical user interface screen to allow a user to select from a plurality of parameters representing the motility and morphology of an object. Calculation of parameters representing the motility and dynamic morphology of an object requires defining the following notation:
  • "F" equals the total number of digitized optical sections involved in the calculation, while "f" equals the digitized optical section subject to the current calculation;
  • "X[f], Y[f]" equals the coordinates of the centroid of digitized optical section f, where 1 ≤ f ≤ F;
  • "I" equals the centroid increment and defines what previous and subsequent mean (for example, a centroid increment of I means the centroid based calculations of the N'th digitized optical section use the (N-I)'th digitized optical section as the previous section and the (N+I)'th digitized optical section as the subsequent section); increasing the centroid increment tends to smooth the particular value, and reduces sudden uneven jumps;
  • "n" equals the number of pixels in a digitized optical section's outline, where P1 . . . Pn represent the n individual pixels, and where Pxn and Pyn comprise the X and Y coordinates of the n'th pixel;
  • "frate" equals the number of digitized optical sections per unit of time;
  • "scale" equals the scale factor in distance units per pixel;
  • "sqrt[number]" returns the square root of the number;
  • "angle[X, Y]" returns the angle in degrees between a vector with origin (X, Y) and the horizontal axis;
  • "NAN" equals NOT A NUMBER, an arbitrarily large designation (1,000,000 for example) generally used to indicate a non-processed value.
  • Speed:
For f-I ≥ 1 and f+I ≤ F,
Speed[f] = (scale)(frate)sqrt[((X[f+I] - X[f-I])/I)^2 + ((Y[f+I] - Y[f-I])/I)^2];
for f-I < 1 and f+I ≤ F,
Speed[f] = (scale)(frate)sqrt[((X[f+I] - X[f])/I)^2 + ((Y[f+I] - Y[f])/I)^2];
for f-I ≥ 1 and f+I > F,
Speed[f] = (scale)(frate)sqrt[((X[f] - X[f-I])/I)^2 + ((Y[f] - Y[f-I])/I)^2];
for all other f, Speed[f] = NAN.
  • Persistence: Persis[f] = Speed[f]/(1 + (100/360)(DirChg[f])). Note - persistence is essentially speed divided by the direction change (converted from degrees to grads). One is added to the denominator to prevent division by 0. If an object is not turning, the persistence equals the speed.
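A direct transcription of the speed and persistence formulas, including the boundary cases, might read as follows. The 1-based index convention mirrors the notation above; DirChg is referenced but not defined in this excerpt, so its value is taken as an input.

```python
import math

NAN = 1_000_000   # the document's NOT A NUMBER sentinel

def speed(X, Y, f, I, scale, frate):
    """Centroid speed at section f with centroid increment I.
    X and Y are centroid lists indexed 1..F (index 0 unused), mirroring
    the 1 <= f <= F convention used in the text."""
    F = len(X) - 1
    if f - I >= 1 and f + I <= F:      # both neighbors exist: central difference
        dx = (X[f + I] - X[f - I]) / I
        dy = (Y[f + I] - Y[f - I]) / I
    elif f + I <= F:                   # start of the record: forward difference
        dx = (X[f + I] - X[f]) / I
        dy = (Y[f + I] - Y[f]) / I
    elif f - I >= 1:                   # end of the record: backward difference
        dx = (X[f] - X[f - I]) / I
        dy = (Y[f] - Y[f - I]) / I
    else:
        return NAN
    return scale * frate * math.sqrt(dx * dx + dy * dy)

def persistence(speed_f, dirchg_f):
    """Speed divided by direction change (degrees converted to grads);
    one is added to the denominator to prevent division by zero."""
    return speed_f / (1 + (100 / 360) * dirchg_f)
```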
  • the second end point of the major axis comprises the pixel furthest from the first end point of the major axis.
  • the major axis equals the chord connecting the first end point to the second end point.
  • Tilt: Tilt[f] = angle in degrees between the major axis and the horizontal axis.
  • Mean Width: MeanWid[f] = Area[f]/MaxLen[f].
  • Maximum Width: MaxWid[f] = length of the longest chord perpendicular to the major axis.
  • Maximum Length: MaxLen[f] = length of the major axis.
  • X Width: XWid[f] = width of the smallest rectangle enclosing the digitized optical section's outline.
  • Y Width: YWid[f] = height of the smallest rectangle enclosing the digitized optical section's outline.
  • XS Width: XSWid[f] = length of the longest chord parallel to XWid[f].
  • YS Width: YSWid[f] = length of the longest chord parallel to YWid[f].
  • Perimeter: the perimeter equals the perimeter of the outline of the digitized optical section.
  • Roundness: roundness is a measure (in percent) of how efficiently a given amount of perimeter encloses an area.
  • Predicted Volume: the predicted volume Vol[f] is the volume of the ellipsoid, with circular cross-section, having length MaxLen[f] and width MeanWid[f].
  • Predicted Surface: the predicted surface area Sur[f] equals the surface area of the ellipsoid, with circular cross-section, having length MaxLen[f] and width MeanWid[f].
  • computing Convex[f] and Concav[f] requires drawing line segments connecting each vertex of the outline.
  • the angles of turning 116 from one segment to the next are measured (Fig. 17). Counter-clockwise turning represents a positive angle, while clockwise turning represents a negative angle. For a closed outline, these angles always add up to 360°. The procedure repeats for holes in the outline.
  • Convex[f] = sum of the positive turning angles.
  • Concav[f] = abs[sum of the negative turning angles].
  • Convex[f] - Concav[f] = (360)(1 + Number of Holes).
  • Convexity and concavity measure the relative complexity of the shape of the outline. For example, the convexity of a circle equals 360 and the concavity equals 0.
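Convexity and concavity follow directly from an ordered outline polygon. The sketch below assumes a closed list of (x, y) vertices traversed counter-clockwise:

```python
import math

def convexity_concavity(points):
    """Sum the positive and negative turning angles of a closed polygon.
    Returns (Convex, Concav) in degrees; for a simple closed outline
    traversed counter-clockwise, Convex - Concav = 360."""
    n = len(points)
    convex = concav = 0.0
    for i in range(n):
        x0, y0 = points[i - 1]
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        a1 = math.atan2(y1 - y0, x1 - x0)      # heading of incoming segment
        a2 = math.atan2(y2 - y1, x2 - x1)      # heading of outgoing segment
        turn = math.degrees(a2 - a1)
        turn = (turn + 180) % 360 - 180        # wrap into the range (-180, 180]
        if turn > 0:
            convex += turn
        else:
            concav += abs(turn)
    return convex, concav
```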
  • Positive and Negative Flow: positive flow essentially measures the amount of new area formed in a certain amount of time (or in the flow increment), expressed in percent.
  • negative flow measures the amount of area lost over the period of time designated by the flow increment in percent.
  • positive and negative flow measure the percent of area expansion and contraction of an object over the flow increment.
  • let f equal the current frame and FI equal the flow increment;
  • let A equal the interior of the (f-FI)'th outline, minus any holes; and
  • let B equal the interior of the f'th outline, minus any holes (with positive and negative flow undefined for f-FI < 1).
  • PosFlow[f] = (100)(Area(P)/Area(A)), where P comprises the region contained in B but not in A (the new area);
  • NegFlow[f] = (100)(Area(N)/Area(A)), where N comprises the region contained in A but not in B (the lost area).
  • An additional option for the calculation of flow involves fixing the centroids over the flow increment. This aligns the A and B areas so that the centroids overlap prior to computing flow, and subtracts centroid movement out of the shape change.
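Positive and negative flow reduce to set operations on the two interior masks. The sketch below assumes boolean NumPy masks and implements the optional centroid alignment as a shift by the rounded centroid difference:

```python
import numpy as np

def flow(mask_a, mask_b, align_centroids=False):
    """PosFlow and NegFlow (percent) between interior masks A (frame f-FI)
    and B (frame f).  Optionally aligns centroids before comparing, so
    that translocation is subtracted from the shape change."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    if align_centroids:
        ca = np.array(np.nonzero(a)).mean(axis=1)
        cb = np.array(np.nonzero(b)).mean(axis=1)
        shift = tuple(np.rint(ca - cb).astype(int))
        b = np.roll(b, shift, axis=(0, 1))     # move B's centroid onto A's
    area_a = a.sum()
    pos = 100.0 * (b & ~a).sum() / area_a      # new area, P = B - A
    neg = 100.0 * (a & ~b).sum() / area_a      # lost area, N = A - B
    return pos, neg
```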
  • Sector Area, Sector Perimeter, Sector Positive Flow, and Sector Negative Flow comprise derivative measurements of the respective standard parameters.
  • the sector measurements allow parameterization of a subset, or sector, of a particular outline.
  • the user inputs the beginning and ending flow in degrees, and the flow range is divided into four sectors. For example, entering 0 and 360 will produce four sectors, with sector 1 consisting of 0° to 90°, sector 2 of 90° to 180°, sector 3 of 180° to 270°, and sector 4 of 270° to 360°.
  • 3D Volume: this involves first converting each facet into a prism by extending the facet inward to the centroid. The volume then equals the sum of the volumes of each prism.
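For triangular facets, extending each facet inward to the centroid yields a tetrahedron, and summing signed tetrahedron volumes gives the enclosed volume. The sketch below assumes a triangulated facet list; it is one standard way to realize the prism construction described above, not necessarily the appendix's method.

```python
import numpy as np

def enclosed_volume(vertices, facets):
    """Volume of a closed faceted surface: sum, over all triangular
    facets, of the signed volume of the tetrahedron formed by the facet
    and the centroid of the surface's vertices."""
    v = np.asarray(vertices, dtype=np.float64)
    centroid = v.mean(axis=0)
    total = 0.0
    for i, j, k in facets:                  # indices of one facet's vertices
        a, b, c = v[i] - centroid, v[j] - centroid, v[k] - centroid
        total += np.dot(a, np.cross(b, c)) / 6.0   # signed tetrahedron volume
    return abs(total)
```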
  • 3D Height and 3D Bulk Height: the first step comprises setting a 3D volume threshold in percent.
  • the 3D Bulk Height equals the 3D Height after eliminating a portion of the 3D Volume equal to the threshold percentage.
  • 3D Width: the longest chord coming from the disc defined by all chords passing through the centroid and perpendicular to the 3D Length.
  • Sphericity: the 3D analog of roundness, essentially a measurement of the efficiency of enclosing the 3D Volume with the 3D Area, in percent.
  • Sphericity essentially comprises an invariant ratio of the area to the volume.
  • the sphericity of a perfect sphere would equal 100%.
  • the Area of Projection equals the ratio of the area of the digitized optical section's outline with the greatest area to the area of the base digitized optical section's outline.
  • the parallel processors 90 can comprise power PC based Macintosh clones communicating over a fast Ethernet network (two megabytes per second transfer rate) or over accelerated SCSI ports (ten megabytes per second). It is anticipated that by utilizing 225 megahertz power PC based computers connected by Ethernet the 3D-DIAS System 10 can accomplish near real time reconstruction and analysis.
  • a distribution work station 88 controls the frame grabber 18.
  • the ten parallel processors 90 perform the steps of outlining, and an integration work station 92 integrates the information from the parallel processors 90 and generates the reconstructed images. The reconstructed image is then played as a dynamic 3D image with four superimposed mini screens which display selected parameters on the computer display terminal 26.
  • the parallel processing system utilizes software which includes a program for separating the digitized optical sections 32 between the parallel processors 90, and software for reintegrating the information from the parallel processors 90 in the integration work station 92.

Abstract

A microscope (12) is used to optically section an object at a plurality of focal depths over a plurality of time periods. The optical sections are digitized (18), and a tag allows identification of at least the time and the focal depth of each digitized optical section. Image processing creates an outline of the periphery of the object through application of a complexity threshold algorithm. A plurality of parameters representing the motility and morphology of the object are calculated. A three dimensional graphical representation of the object is reconstructed from the plurality of digitized optical sections for computerized viewing (26).

Description

THREE DIMENSIONAL DYNAMIC IMAGE ANALYSIS SYSTEM
Related Applications
The present application claims priority based upon Provisional Application 60/051,095, filed June 27, 1997.
Background of the Invention
The invention relates generally to motion analysis, and more specifically, to a three
dimensional elapsed time system for the analysis of the motility and morphology of a moving
object.
The analysis of the behavior of motile, living cells using computer assisted systems comprises a crucial tool in understanding, for example, why cancer cells become metastatic, why HIV infected cells do not perform their normal functions, and the roles of specific cytoskeletal and signaling molecules in cellular locomotion during embryonic development and during cellular responses in the immune system. Further, motion analysis systems have been used to analyze the parameters of shape and motion of objects in a variety of diverse fields. For example, such systems have been used for analysis of such diverse dynamic phenomena as the explosion of the space shuttle Challenger, car distortion, echocardiography, human kinesiology, insect larvae crawling, sperm motility and bacterial swimming, to analyze cell movement and morphological change, to quantitate shape changes of the embryonic heart, to quantitate breast movement for reconstructive surgery, and to analyze human form and movement. Oftentimes the information required to analyze such systems required manual gathering. For example, in analyzing embryonic heart action, a researcher would display an echocardiograph of a heart on a monitor and make measurements of the monitor using a scale, or the like, held up to the screen. The tedious and time consuming nature of these types of manual measurements severely limits the practicality of such an approach. A motion analysis system for the biological study of cell motility and morphology requires the use of state of the art videomicroscopy, image processing, and motion analysis systems.

United States Patent No. 5,655,028 describes a computer assisted two dimensional system for analyzing the dynamic morphology of moving objects, and particularly of organisms on the cellular level. An electronic signal corresponding to the images, for example, from a video camera, is input into a digitizer which identifies the coordinates of the periphery of the mobile object in each of the images. A digital processor processes the contour information, and a computer controlled by a software program having image processing and graphics capabilities calculates a plurality of parameters representative of the shape and motion of the object. The output from the computer may be displayed in graphical representations, in tabular form, in the form of animations on a monitor, or in hard copy printouts of tables, animations, and other graphical representations in two dimensions. Such a system, however, lacks the ability to fully capture every aspect of the dynamic morphology of a moving object. For example, the analysis of the motion of certain types of amoebae and human leukocytes translocating across a substratum revealed that these objects changed area in a cyclical fashion when viewed in a two dimensional focal plane. In other words, portions of the cells go in and out of focus over time. Additionally, in some cases pseudopods can form off of the substratum. These observations indicate that only a three dimensional motion analysis system can fully capture the dynamic morphology of these types of moving objects.
Consequently, a need exists to develop a 3-D motion analysis system for capturing the motility and morphology of moving objects, particularly on the cellular level, that obtains optical sections of a moving object within a time interval short enough so that the motility of the object does not significantly affect each static 3-D reconstruction, reconstructs a 3-D mathematical model of the image representative of the object within short time intervals, and repeats the reconstruction process at short time intervals in order to achieve three dimensional elapsed time analysis of the motility and morphology of the moving object.

The specification of the invention includes a microfiche appendix submitted according to 37 C.F.R. § 1.96 of twenty-two (22) microfiche, comprising a total of 2139 frames. The appendix is a printout of the source code of the computer program which controls operation of the present invention.
Summary of the Invention
An object of the present invention comprises providing a method for the three dimensional elapsed time analysis of the motility and morphology of a moving object.

A further object of the present invention comprises providing a system for the three dimensional elapsed time analysis of the motility and morphology of a moving object. These and other objects of the present invention will become apparent to those skilled in the art upon reference to the following specification, drawings, and claims.

The present invention intends to overcome the difficulties encountered heretofore. To that end, a microscope is used to optically section an object at a plurality of focal depths over a plurality of time periods. The optical sections are digitized, and a tag allows identification of at least the time and the focal depth of each digitized optical section. Image processing creates an outline of the periphery of the object through application of a complexity threshold algorithm. A plurality of parameters representing the motility and morphology of the object are calculated. A three dimensional graphical representation of the object is reconstructed from the plurality of digitized optical sections for computerized viewing.
Brief Description of the Drawings

Fig. 1 is a component diagram of a 3-D digital image analysis system.
Fig. 2 is a top plan view of the optical sectioning and outlining of a motile object at a plurality of focal depths.
Fig. 3 is a block diagram of the outlining process.
Fig. 4 is a top plan view of an optical section of an object and the digitized outline of the object.
Fig. 5 is a dilation of the digitized outline of Fig. 4.
Fig. 6 is an erosion of the digitized outline of Fig. 5.
Fig. 7 is a top plan view of a digitized optical section.
Fig. 8 is a display from a graphical user interface depicting a plurality of parameters representing the motility and morphology of an object.
Fig. 9 is a top plan view of a plurality of optical sections and their corresponding outlines at a plurality of focal depths.
Fig. 10a is a top plan view of a plurality of digitized optical sections at a plurality of focal depths, with the out of focus backgrounds subtracted out.
Fig. 10b is a stacked image reconstruction of the plurality of optical sections shown in Fig. 10a viewed from a plurality of attitudes.
Fig. 10c is a faceted image reconstruction of the optical sections of Fig. 10a viewed at a plurality of attitudes.
Fig. 11 is a top plan view of outlines of a plurality of digitized optical sections stacked with contour levels corresponding to focal depths.
Fig. 12 is a stacked image reconstruction with a slotted interior viewed over several periods of time, with a faceted image reconstruction of the portion of the object corresponding to the slot.
Fig. 13 is a top plan view of an outline of a digitized optical section and a slot.
Fig. 14 is a slotted faceted image reconstruction of an object over several periods of time.
Fig. 15 is an elevation view of the outline of a plurality of digitized optical sections with a lateral indentation.
Fig. 16 is a component diagram of an alternative 3-D image analysis system.
Fig. 17 is an illustration of turning angles used to compute convexity and concavity.
Fig. 18 is a graph of speed versus time.
Fig. 19 is a top plan view of a digitized optical section and outline.
Fig. 20 is a top plan view of a digitized optical section and outline.
Fig. 21 is a top plan view of a digitized optical section and outline.
Fig. 22 is a top plan view of a digitized optical section and outline.
Fig. 23 is a top plan view of a digitized optical section and outline.
Fig. 24 is a top plan view of a digitized optical section and outline.

Detailed Description of the Invention

In the drawings, Fig. 1 shows a 3-D digital image analysis system (DIAS) 10. The 3-D DIAS System 10 comprises an inverted compound microscope 12 fitted with differential interference contrast (DIC) optics, a camera 14, a VCR 16, a frame grabber 18, a character generator 20, a computer 22 having a serial port 24, a computer display terminal 26, and a keyboard 28. Additionally, a stepper motor 13 attaches to a focus knob 11 of the DIC microscope 12. In the preferred embodiment of the invention the stepper motor 13 comprises a computer programmed MicroStepZ3D stepping motor. The camera 14 configures for NTSC video, and in the preferred embodiment of the invention comprises a cooled CCD camera which can handle 30 frames per second without motion blurring. The VCR 16 comprises a conventional high quality tape recorder or video disk system, equipped with a frame grabber 18. In the preferred embodiment of the invention the frame grabber 18 configures for use with a Macintosh operating system based computer capable of grabbing 30 frames per second of at least a 3/4 size image and storing the results as a QuickTime movie. The computer 22 comprises a Macintosh computer, in particular a power PC based computer with a core processor speed of at least 225 megahertz, a two gigabyte hard drive, and forty-eight megabytes of RAM. The computer display terminal 26 is capable of pseudo three dimensional viewing through a stereo graphics "crystal eyes" 3-D display screen with special glasses 29, or, at a fifty percent or greater reduction in resolution, a standard color display with inexpensive red-blue stereo glasses.
Those skilled in the art will appreciate the possibility of various changes and substitutions to the components of the 3-D DIAS System 10 without departing from the scope of the present invention. For example, the computer 22 can comprise any number of types and varieties of general purpose computers, or a digital camera with a direct link to the computer 22 could replace the camera 14 and VCR 16. Additionally, the preferred embodiment of the present invention utilizes differential interference contrast microscopy. DIC optics has the advantage of high resolution microscopy without the use of dyes or lasers, which may lead to the premature death of the organisms due to increases in heat and the effects of phototoxicity. Premature death leads to shortened periods of motility and dynamic morphology for analysis. By contrast, confocal optical systems that use lasers typically require application of stains or dyes to the motile objects. This will kill a living object, which eliminates the possibility of analyzing the object's motility and morphology. Deconvolution methods involve phase or standard light microscope images, and presently do not exhibit sufficient optical quality to practice the present invention. In other words, while DIC microscopy comprises the preferred method of practicing the present invention, the possibility exists to use other microscopy techniques despite their drawbacks. The computer 22 performs the methods of the present invention under computer control through the use of programming means in the form of a 3-D DIAS software package (see microfiche appendix). The method begins by placing the sample object on the DIC microscope 12. Since typically the object comprises a living cell, the object is contained in a fluid filled viewing chamber (not shown). Accordingly, the supporting materials must be of the correct width and chemical nature (glass vs. plastic vs. quartz) to be compatible with the focal depth and the light transmission for the particular objects used. A magnification must be selected which is compatible with the speed of cellular translocation over the period of recording and, most importantly, compatible with the size of the cell. The stepper motor 13 must be programmed so that one cycle spans the desired Z-axis focal depth. For example, for amoebae cells like Dictyostelium amoebae or polymorphonuclear leukocytes, which average fifteen microns in length and usually no more than ten microns in height, a Z-axis distance of ten to twenty microns is sufficient, but the exact Z-axis distance must be empirically defined. Next, the method comprises optically sectioning an object at a plurality of focal depths over a first period of time. In order to accomplish this optical sectioning a scan rate must be chosen. A two second scan in either direction, up or down, including 30 optical sections is more than sufficient for the analysis of cells moving at velocities of seven to twenty microns per minute. This rate results in relatively small errors due to cell movement during the time of sectioning. Since reconstructions can be made from a scan up as well as down the Z-axis, a fast rate and a fast frequency of scanning would include sequential up and down scans, each including 30 frames over one second through ten microns. Although the optical sections can be read directly into the frame grabber 18, it is more effective initially to make a video recording or tape for several reasons.
First, image acquisition on video tape is relatively limitless and inexpensive and, therefore, will accommodate extended recording periods. Real time frame grabbing will have storage limits. Second, the image acquisition on tape allows the character generator 20 and the stepper motor 13 to notate each video frame for time, height, and direction of scan. As each of the plurality of optical sections is read, the image from the camera 14 transfers to the VCR 16, then to the frame grabber 18, and into the computer 22 via serial port 24. This process repeats for a plurality of focal depths over a first period of time. The focal depth varies through movement of the stepper motor 13 fixed to the focus knob 11 of the DIC microscope 12. The frame grabber 18 digitizes each of the plurality of optical sections and then transfers the data to the computer 22. Simultaneously the stepper motor 13 and the character generator 20 transfer information to the computer 22, which associates a tag with each of the plurality of digitized optical sections. The tag allows identification of at least the time and the focal depth corresponding to each of the plurality of digitized optical sections. The data transfers into the computer 22, preferably a Macintosh computer, and results in the creation of a QuickTime movie. The present invention also works with PICT stacks in addition to QuickTime movies. Digitized optical sections can be read into the computer 22 at a maximum rate of thirty frames per second or, if desired, a lower rate such as ten or twenty frames per second. Those of ordinary skill in the art will appreciate the applicability of the present invention to even higher rates of capture as the technology develops. A twenty minute segment read in at thirty frames per second will take more than five hundred megabytes of storage on a hard disk. The QuickTime movie is synchronized to the automatic up and down scans, and the times of the scans are recorded in a synchronization file in the computer 22. The frames of the QuickTime movie are then extracted into a 3-D DIAS movie format, from which the user can select a number of digitized optical sections to be used in reconstructions, the interval between reconstructions, and image averaging. For instance, the user may only need every other section for reconstructions. The desired digitized optical sections are then stored in a compact 3-D DIAS movie file. All subsequent 3-D DIAS procedures access this movie file format. The QuickTime movie format is designed for smooth viewing in real time and provides a very slow direct access time of two seconds per frame. The 3-D DIAS movie currently provides direct frame access at a rate of five frames per second, which is presently ten times faster than the QuickTime movies. In addition, if the area of the object in a frame takes up a minority of pixels, the 3-D DIAS software performs a compression to reduce memory. Alternatively, during the optical sectioning a user can reduce the size of the optical section to a specific window which contains only a portion of interest, thereby reducing the amount of digitized information. The 3-D DIAS movie allows for frame averaging to reduce background noise and accentuate the periphery of the object. For instance, at a rate of thirty frames per second, every three frames can be averaged in an overlapping fashion, resulting in the second to twenty-ninth optical sections averaged with their two neighboring sections, and the two end sections (one and thirty) averaged with only one neighboring section.
Fig. 10a shows a portion of a set of twelve digitized optical sections 32 of a Dictyostelium amoeba at one micron increments taken in a two second period and averaged over three frames, providing in focus perimeters amenable to subsequent automatic outlining (see also Fig. 2).
After optically sectioning the object at a plurality of focal depths over several periods of time, and digitizing the plurality of optical sections 32, the next step comprises outlining the periphery of the object for each of the plurality of digitized optical sections 32.
Since a typical twenty minute recording of a translocating cell, for example, in which thirty optical sections are performed in two seconds and repeated every five seconds, would include seven thousand two hundred optical sections, the automatic outlining program is clearly essential. For very high resolution reproductions, manual outlining is sometimes required, but the automatic feature of the present invention is highly efficient and can be rapidly edited, if necessary, by the user.
Fig. 2 shows the before and after effect of outlining an object at a plurality of focal depths. Fig. 2a shows the original digitized optical sections 32 of an object at twelve different focal depths, and Fig. 2b shows the same digitized optical sections 32 with the corresponding outlines 38 included. The outline 38 attempts to trace the circumference of the in focus portion of the object. Fig. 2 shows that not only does the size of the in focus portion of the object vary at different focal depths, but the surrounding background also varies. This comprises a significant challenge to the outlining process. In some portions of the digitized optical section 32 the boundary between the in focus portion and the out of focus portion represents a bright area; in other parts of the digitized optical sections 32 the boundary between the in focus and out of focus areas represents a dark area. This means that a simple gray level thresholding technique, which selects or deselects pixels based solely on their grayscale value, cannot successfully perform the task of outlining the digitized optical sections 32. The present invention uses a combination of a variety of image processing techniques to accomplish the task of outlining the periphery of the digitized optical sections 32.
Fig. 3 shows in block diagram form the theoretical steps of the outlining process. Those of ordinary skill in the art will appreciate the fact that the order of the steps depicted in Fig. 3 can vary without departing from the intended scope of the present invention, and in some cases the computer 22 can perform the steps simultaneously.
Fig. 3 shows a smooth image step 102, which normally occurs at the beginning of image processing, to prepare the digitized optical section 32 for the actual outlining. Smoothing tends to remove jagged and rough edges, and reduces the overall contrast. The smooth image step 102 involves standard smoothing techniques.
The next step comprises the complexity threshold step 104. Complexity, in this case, is defined as the standard deviation from the mean pixel grayscale value within a 3x3 or 5x5 pixel neighborhood surrounding the pixel under analysis. The neighborhood is referred to as a kernel. Since the perimeter of a cell represents a boundary of high contrast, the standard deviation of the grayscale of a pixel at an edge, and the pixels on either side (inside and outside of the cell), will be high. Therefore, the complexity will also be high. In other words, for each of the digitized optical sections 32 the transition between the in focus region and the out of focus region is defined by an area of high grayscale contrast. In this manner, examining a 3x3 or 5x5 kernel and calculating the standard deviation of the grayscales of the kernel allows for identifying the boundaries of the cell periphery for a particular digitized optical section 32 at a particular focal depth. For each pixel, based on the pixel's corresponding kernel, a standard deviation representing the amount of grayscale variation within the kernel is calculated. A threshold value allows selecting only those pixels with a complexity value above the threshold. Thus, kernels with a high standard deviation represent areas of high complexity based on a large amount of contrast in that pixel neighborhood. Conversely, kernels of low standard deviation represent areas of low complexity due to the minimal amount of grayscale contrast. This process effectively deselects the background of the image and also the interior of the object, since these regions of the digitized optical sections 32 tend to exhibit low contrast. The actual threshold value can correlate to a grayscale level between 0 and 255, or a percentage between 0 and 100, with the low value representing regions of low complexity and the high value representing regions of high complexity, or any other similar designation. Regardless of the specific designation, the threshold represents a cutoff level: all of the pixels whose kernels yield complexity levels below the threshold receive a grayscale value of 255 (white), and all of the pixels with complexity values above the threshold receive a grayscale value of 0 (black). For analysis purposes, therefore, the particular digitized optical section 32 converts to an image where the background and the cell interior appear white and only the periphery of the object appears black. The black areas then form the outline 38. Typically, increasing the complexity threshold value will shrink or remove the outline 38, while lowering the complexity threshold value will increase the area of the outline 38.
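A sketch of the complexity threshold in Python follows (the kernel size of 3 and threshold of 12 are illustrative values only, not values taken from the specification):

    import numpy as np
    from scipy.ndimage import generic_filter

    def complexity_threshold(image, kernel_size=3, threshold=12.0):
        # Complexity of each pixel = standard deviation of the grayscale
        # values in the kernel_size x kernel_size neighborhood around it.
        complexity = generic_filter(image.astype(float), np.std, size=kernel_size)
        # Pixels above the threshold become black (0); all others white (255).
        return np.where(complexity > threshold, 0, 255).astype(np.uint8)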
In most situations further image processing will prove necessary to complete the outlining process. In some instances high complexity regions may exist in the background areas far outside of the periphery of the object. In these circumstances, simply applying a complexity thresholding technique will not remove these regions. Additionally, another problem that can occur involves the fact that some regions of the periphery of the digitized optical sections 32 do not comprise areas of high complexity. For example, Figs. 4a-b show two digitized optical sections 117, 118 in which application of the complexity threshold did not form complete outlines 120, 122. In Fig. 4a, the digitized optical section 117 appears in two sections with a fuzzy low contrast transition between the two. Therefore, application of a complexity threshold did not properly outline the transition area (see outline 120 of Fig. 4b). Similarly, the digitized optical section 118 shows that a portion of the periphery comprises a fuzzy low contrast region, which an application of the complexity threshold technique failed to fully outline (see outline 122 of Fig. 4b). Accordingly, the outlines 120, 122 in Fig. 4b require further image processing. To deal with the situation of incomplete and partial outlines the 3D-DIAS System 10 provides the ability to dilate, erode, and smooth the digitized optical sections 32. Referring again to Fig. 4, applying the complexity threshold step 104 to digitized optical section 117 produces outline 120. Similarly, applying the complexity threshold step 104 to digitized optical section 118 produces outline 122. Fig. 4 shows that both the outlines 120, 122 do not completely enclose their respective objects 117, 118. The first step in completing the outlines 120, 122 comprises the dilate step 106 (Fig. 3). Dilation involves selecting every pixel that surrounds a black pixel and converting that pixel to a grayscale of 0 (black). Fig. 5a shows the dilation process applied to the outlines 120, 122. This produces dilations 124, 126, or a broader outline that fills in the gaps in the original outlines 120, 122 of Fig. 4b. In particular, dilation involves adding the four horizontal and vertical neighboring pixels for each pixel of the digitized outlines 120, 122 appearing in Fig. 4b. The dilation process fattens the object by the amount of dilation. In this manner, the gaps that appeared in the original outlines 120, 122 fill in. Next, the outer perimeters of dilation 124 and dilation 126 are outlined, creating a dilated outline 128 and a dilated outline 130 shown in Fig. 5b.
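One dilation pass over a boolean outline mask can be sketched as follows (a hypothetical helper, not code from the appendix):

    import numpy as np

    def dilate(mask):
        # mask: boolean array, True where a pixel is black (part of the outline).
        # One pass adds the four horizontal and vertical neighbors of every
        # black pixel, fattening the outline by one pixel.
        out = mask.copy()
        out[1:, :] |= mask[:-1, :]
        out[:-1, :] |= mask[1:, :]
        out[:, 1:] |= mask[:, :-1]
        out[:, :-1] |= mask[:, 1:]
        return out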
The 3D-DIAS System 10 utilizes additional image processing to smooth the black pixels remaining after the dilate step 106. Fig. 3 shows the smooth outline step at 108. Again, the smooth outline step 108 utilizes standard smoothing techniques. For example, one smoothing technique involves converting the locations of all non-white pixels to floating point numbers, and then averaging the pixel locations for a neighborhood. Then, a pixel is added at a location as close as possible to the average location. This reduces the roughness of the outline 38.
Application of a grayscale threshold step 110 can further enhance the image processing. The grayscale threshold step 110 merely removes pixels with grayscale values below the grayscale threshold value. As noted previously, grayscale typically varies from 0 (white) to 255 (black); however, the grayscale threshold can be expressed as a percent from 0% (white) to 100% (black). This step effectively reduces any remaining residual background areas.
A further technique to solve the problem of residual background areas comprises application of a minimum pixel filter step 112. The minimum pixel filter step 112 searches for continuous black pixel regions where the number of pixels equals a number less than the minimum pixel filter value, and then removes these pixel regions. This allows removal of small, high contrast regions appearing in the background of the digitized optical section 32. While the default for the minimum pixel filter value comprises twenty-five, most of the outlined background consists of groups of between five and ten pixels. Typically, a minimum pixel filter value of between five and ten will allow for the removal of these unwanted background objects without interfering with the outline 38 of the digitized optical section 32. In a similar manner, Fig. 3 shows a maximum pixel filter step 114. The maximum pixel filter step 114 allows for the elimination of large unwanted areas that appear within the digitized optical section 32. The maximum pixel filter step 114 selects those regions of the digitized optical section 32 with continuous pixel groupings above the maximum pixel filter size. The default maximum pixel filter value equals twenty thousand, but of course, will vary based on the specific application.
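Both pixel filters can be sketched together with a connected-component labeling pass (the defaults of twenty-five and twenty thousand follow the text; the use of scipy.ndimage.label is an implementation choice, not the patent's):

    import numpy as np
    from scipy.ndimage import label

    def pixel_size_filter(mask, min_pixels=25, max_pixels=20000):
        # Label continuous black-pixel regions, then keep only regions whose
        # size falls between the minimum and maximum pixel filter values.
        labeled, _ = label(mask)
        sizes = np.bincount(labeled.ravel())
        keep = (sizes >= min_pixels) & (sizes <= max_pixels)
        keep[0] = False   # label 0 is the background, never kept
        return keep[labeled]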
Further image processing comprises the erode step 115 (Fig. 3). Eroding the dilated outlines 128, 130 creates an eroded outline 132 and an eroded outline 134, respectively (Fig. 6). The necessity of the erode step 115 results from the prior dilate step 106 and smoothing steps 102, 108, which all serve to fatten the outline 38. Therefore, to return the outline 38 to the proper size requires eroding the outline 38 by the number of dilation steps plus the number of times the outline 38 is smoothed. Referring to Fig. 6, the erosion process moves each pixel of the dilated outlines 128, 130 inward a distance of one pixel. In this manner, the eroded outlines 132, 134 now more accurately reflect the periphery of the object in the digitized optical sections 117, 118. In the preferred embodiment of the invention the dilate default equals three, since the erode default equals two and the smooth outline default equals one.
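A matching sketch of one erosion pass follows (again a hypothetical helper); per the text, the pass would be repeated a number of times equal to the number of dilations plus the number of smoothing passes:

    import numpy as np

    def erode(mask):
        # The inverse of dilation: a black pixel survives only if all four
        # horizontal and vertical neighbors are also black, moving the
        # outline inward by one pixel per pass.
        out = mask.copy()
        out[1:, :] &= mask[:-1, :]
        out[:-1, :] &= mask[1:, :]
        out[:, 1:] &= mask[:, :-1]
        out[:, :-1] &= mask[:, 1:]
        return out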
Fig. 9 shows a further illustration of the result of outlining. Fig. 9 shows a plurality of digitized optical sections 32, each taken at a different focal depth, and the associated outline 38 of each digitized optical section 32. In this case, not only do the outlines 38 change in size and shape, but some of the outlines 38 contain more than one distinct circumscribed area. Those of ordinary skill in the art will appreciate the fact that optimizing the outlining parameters comprises a trial and error process that involves varying not only the outlining parameters but the number of times each step is performed. After selecting the optimum imaging parameters, however, all of the plurality of digitized optical sections 32 can be processed with the optimized parameters. To illustrate the optimization process, Figs. 19-21 show the effect of varying the number of times the smooth image step 102 is performed. In Fig. 19 the smooth image step 102 is performed once, in Fig. 20 the smooth image step 102 is performed twice, and in Fig. 21 the smooth image step 102 is performed four times. Increasing the smoothing of the image effectively reduces the sharpness of the image and, therefore, reduces the complexity of the digitized optical section 32. This reduces the area of the outline 38, since the smoothing reduces the contrast of the digitized optical section 32.
Figs. 22-24 show the effect of differing combinations of the dilate step 106 and the erode step 115. In Figs. 22-24 the smooth image step 102 is performed once, and the smooth outline step 108 is performed three times. In Fig. 22 the dilate step 106 is performed twice, and the erode step 115 is not performed. In Fig. 23 the dilate step 106 is performed three times, and the erode step 115 is performed six times. In Fig. 24 the dilate step 106 is performed three times, and the erode step 115 is performed eight times. The overall effect shown in Figs. 22-24 comprises increasing the gap between the number of dilate steps 106 and the number of erode steps 115, which in general reduces the size of the outline 38. Also, increasing the number of dilate steps 106 and erode steps 115 between the values depicted in Fig. 22 and Fig. 23 helped to better fill in a particularly bright portion of outline 38. The preceding examples of the effect of altering the outlining parameters merely demonstrate the type of iterative process required for optimization, and illustrate some general trends applicable to changing certain parameters. The specific effect, of course, will vary depending on the exact circumstances of the application.
Despite the overall effectiveness of the automatic outlining method, some instances may require manual outlining. For example, Fig. 7 shows an outline 38 with a lateral indentation 78. The outline 38 represents the ideal, or perfect, outline 38. Applying the above outlining parameters could result in filling in the lateral indentation 78 with outline 76 (shown in phantom). In this type of situation the 3D-DIAS System 10 provides for the possibility of manual outlining. After optically sectioning the object, digitizing the optical sections, and outlining the digitized optical sections 32, the next step comprises reconstructing from the plurality of digitized optical sections 32 a three dimensional graphical reconstruction of the object for computerized viewing. The 3D-DIAS System 10 contemplates two types of reconstructions: first, a three dimensional elapsed time stacked image reconstruction 34, shown in Fig. 10b; and second, a three dimensional elapsed time faceted image reconstruction 36, shown in Fig. 10c. The stacked image reconstruction 34 essentially comprises stacking each of the digitized optical sections 32, wherein the focal depth of the digitized optical sections 32 translates into a height. Fig. 10a shows a plurality of twelve digitized optical sections 32, each at a different focal depth. The computer 22, again under the control of programming means, constructs a stacked image reconstruction 34 by stacking each of the digitized optical sections 32 by height. The first stacked image reconstruction 34 of Fig. 10b shows the digitized optical sections from a 0° viewing attitude, with each digitized optical section labeled from one to twelve. Thus, the digitized optical section 32 appearing in Fig. 10a (1) appears at the bottom of the stacked image reconstruction 34 shown in Fig. 10b at 0°, and the digitized optical section 32 appearing in Fig. 10a (12) appears at the top of the same stacked image reconstruction 34. The stacked image reconstruction 34 viewed from the 0° viewing attitude only displays a side view of each digitized optical section 32, but clearly shows the height spacing between each digitized optical section 32.
Each stacked image reconstruction 34 displays only that portion of each of the plurality of digitized optical sections 32 defined by the outline 38 of the digitized optical sections 32, and visible from the particular viewing attitude. The 30° stacked image reconstruction 34 of Fig. 10b shows the digitized optical sections 32 of Fig. 10a viewed from a viewing attitude of 30° above the horizontal. In this manner, the edges of the digitized optical sections 32 overlap each other, clearly showing the three-dimensional nature of the stacked image reconstruction 34. The stacked image reconstruction 34 essentially comprises overlapping a series of two dimensional digitized optical sections 32, and then displaying only that portion of the digitized optical sections 32 not overlapped or hidden by an underlying digitized optical section 32. For example, starting with the lowest level digitized optical section 32 shown in Fig. 10a (1), each subsequent digitized optical section 32 stacks over the top of the previous digitized optical section 32. The computer assigns a grayscale value to each point of each of the plurality of digitized optical sections 32, with the grayscale of each digitized optical section 32 decreasing by height. As each digitized optical section 32 is laid over the lower digitized optical section 32, that portion of the preceding digitized optical section 32 overlapped by the newly applied digitized optical section 32 is no longer visible from that particular viewing attitude. Fig. 10b also shows the same stacked image reconstruction 34 displayed from a 60° viewing attitude and a 90° viewing attitude, which expose for viewing different portions of the digitized optical sections 32. By creating a stacked image reconstruction 34 for each period of time of optical sectioning, and displaying each stacked image reconstruction 34, the 3D-DIAS System 10 creates and displays a three dimensional elapsed time stacked image reconstruction of the object.
Fig. 10c shows a faceted image reconstruction 36 of the plurality of digitized optical sections 32 appearing in Fig. 10a. The faceted image reconstruction method begins by constructing a top wrap 84 and a bottom wrap 86 (see also Fig. 15). Conceptually, the top wrap 84 is essentially identical to the stacked image reconstruction 34 shown in Fig. 10b viewed from a 90° attitude, and the bottom wrap 86 consists of the same stacked image reconstruction 34 viewed from a minus 90° attitude. In other words, in abstract terms the faceted image reconstruction 36 consists of dividing the stacked image reconstruction 34 into a top wrap 84 and a bottom wrap 86. Then, by casting a fishing net over the top wrap 84, the webbing of the fishing net forms facets 94 that define the outer perimeter of the top wrap 84. In an identical manner, the process repeats for the bottom wrap 86. The last step involves joining the top wrap 84 and the bottom wrap 86 at a seam. This process essentially creates a mathematical 3-D model of the enclosed object. In more particular terms, the process of creating the top wrap 84 involves the following steps. First, each of the plurality of digitized optical sections 32 is assigned a height corresponding to its focal depth. This process is identical to the process of assigning heights used to create the stacked image reconstructions 34. Next, the computer 22 of the 3D-DIAS System 10 identifies pixels corresponding to only that portion of the area of each of the plurality of digitized optical sections 32 defined by the outline 38, and not overlapped by another digitized optical section as viewed from the top of the reconstruction. Essentially, this involves creating the stacked image reconstruction 34 of Fig. 10b viewed from the 90° attitude. Thus, the digitized optical section 32 shown in Fig. 10a at (12) appears at the top of the stack, and the digitized optical section 32 shown in Fig. 10a at (11) appears directly underneath. However, that portion of the digitized optical section 32 of Fig. 10a shown at (11), overlapped by the digitized optical section 32 shown in Fig. 10a at (12), does not appear. The process repeats until the appropriate portions of each of the plurality of digitized optical sections 32 appear in the top wrap 84. Next, each identified pixel is assigned an X, Y, and Z coordinate, where the X and Y coordinates correlate to the pixel's row and column and the Z coordinate represents the height of the location of the pixel in the faceted image reconstruction 36. If the particular pixel 95 happens to lie exactly on the outline 38 of a particular digitized optical section 32 (see Fig. 11), then the Z coordinate equals the height of that particular digitized optical section 32. Using the analogy of a contour map, the pixels that lie directly on an outline 38 of a digitized optical section 32 essentially lie exactly on a contour line. This allows for quickly determining the Z coordinate for these particular pixels. For pixels, like pixel 96, lying within a particular outline 38, but not actually on the outline 38, the Z coordinate is assigned a height based on a weighted average. For those pixels lying outside of the outline, for example background pixels, the Z coordinate can be designated an easily recognizable arbitrary number like one million.
Those skilled in the art will realize that a number of techniques can accomplish the calculation of the weighted average. One such technique, however, contemplated by the present invention involves defining a plurality of rays extending from each of the pixels within any of the outlines 38 of any of the digitized optical sections 32 to the next nearest outline 38. Additionally, the weighting scheme involves weighting the shortest rays more than the longest rays. In particular, Fig. 11 shows a plurality of digitized optical sections 32 stacked according to height, where the height corresponds to the focal depth of the particular digitized optical section 32. In this manner, the plurality of digitized optical sections 32 take on the look of a contour map. Fig. 11 shows a pixel 96 located between a zero micron contour level 66, a plus one micron contour level 68, and a plus two micron contour level 70. Since the pixel 96 does not lie directly on any of the outlines 38, the Z coordinate of the pixel 96 must equal a value somewhere between the heights of the surrounding outlines 38 of the digitized optical sections 32. One method to calculate the Z coordinate value of the pixel 96 involves drawing a plurality of rays from the pixel 96 to the surrounding outlines 38 and weighting the shorter rays more than the longer rays. Fig. 11 shows eight rays extending at 45° angles from the pixel 96. A first ray 50, a second ray 52, a third ray 54, a fourth ray 56, a fifth ray 58, and a sixth ray 60 all extend from the pixel 96 to the zero micron contour level 66. A seventh ray 62 extends from the pixel 96 to the plus one micron contour level 68, and an eighth ray 64 extends from the pixel 96 to the plus two micron contour level 70. In this manner, each of the eight rays 50-64 extends a certain length L1-L8, and contacts an outline 38 of a particular height H1-H8. Calculation of the Z coordinate of the pixel 96 proceeds by using an equation that weights the heights H1-H8 in inverse proportion to their lengths L1-L8, in the following manner:
Z = [H1(1/L1) + H2(1/L2) + H3(1/L3) + H4(1/L4) + H5(1/L5) + H6(1/L6) + H7(1/L7) + H8(1/L8)] / [(1/L1) + (1/L2) + (1/L3) + (1/L4) + (1/L5) + (1/L6) + (1/L7) + (1/L8)]
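The eight-ray weighted average can be sketched as follows (a hypothetical data layout in which each outline height maps to a boolean mask of its outline pixels; ray lengths are counted in pixel steps, so a full implementation would scale diagonal steps by the square root of two):

    def ray_weighted_height(pixel, contours):
        # pixel: (row, col) of an interior pixel; contours: dict mapping each
        # outline height (e.g., in microns) to a boolean mask of outline pixels.
        rows, cols = next(iter(contours.values())).shape
        directions = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                      (0, 1), (1, -1), (1, 0), (1, 1)]
        num = den = 0.0
        for dr, dc in directions:
            r, c, length = pixel[0], pixel[1], 0
            while True:
                r, c, length = r + dr, c + dc, length + 1
                if not (0 <= r < rows and 0 <= c < cols):
                    break   # ray left the image without striking an outline
                heights = [h for h, m in contours.items() if m[r, c]]
                if heights:
                    num += heights[0] * (1.0 / length)   # H_i weighted by 1/L_i
                    den += 1.0 / length
                    break
        return num / den if den else None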
This process repeats until each pixel of the top wrap 84 is assigned an X, Y, and Z coordinate. In a converse fashion, the computer 22 of the 3D-DIAS System 10 constructs a bottom wrap 86. The only difference between the bottom wrap 86 and the top wrap 84 involves the viewing attitude of the stacked image reconstruction 34. The top wrap 84 uses a viewing attitude of 90°, while the bottom wrap 86 repeats the process with a viewing attitude of minus 90°. In other words, the top wrap 84 views the stacked image reconstruction 34 from the top, and the bottom wrap 86 views the stacked image reconstruction 34 from the bottom.
After creating the bottom wrap 86 and the top wrap 84, the next step in the method comprises dividing the bottom wrap 86 and the top wrap 84 into facets, where the facets 94 (see Fig. 15) encompass the surface area of the top and bottom wraps 84, 86. The facets 94 comprise the area between the vertical and horizontal contour lines that surround the top and bottom wraps 84, 86. The vertices of the facets 94 have X, Y, and Z coordinates defined by the above process, and are, therefore, easily quantified. While Fig. 15 shows the facets 94, Fig. 10c provides a better illustration, although the facets are too numerous to number individually. Fig. 10c shows the faceted image reconstruction 36 of the digitized optical sections 32 shown in Fig. 10a, viewed from a plurality of viewing attitudes. The faceted image reconstruction 36 of Fig. 10c viewed at an attitude of 90° shows the facets 94 of the top wrap 84. Dividing the top wrap 84 and the bottom wrap 86 according to vertical and horizontal contour lines creates the facets 94 at the intersections of the contour lines. Therefore, the perimeter of each facet 94 is defined by pixels with X, Y, and Z coordinates. Evaluating the X, Y, and Z coordinates of the pixels of the top wrap 84 and the bottom wrap 86 allows identification of a seam which defines the intersection of the facets 94 of the top wrap 84 and the facets 94 of the bottom wrap 86. Joining the top wrap 84 and the bottom wrap 86 at the seam allows creation of the faceted image reconstruction 36, and repeating this process over several periods of time allows creation of a three dimensional elapsed time faceted image reconstruction.
One difficulty encountered with the faceted image reconstruction process involves the inability to accurately depict advanced and complicated contours. Fig. 15 shows a stacked image reconstruction 34 of a plurality of digitized optical sections 32 which contains a pronounced lateral indentation 78. For pronounced lateral indentations, like the lateral indentation 78 of Fig. 15, the faceted image reconstruction method will not accurately describe the area of indentation. Digitized optical sections 32 above and below the lateral indentation 78 overhang the lateral indentation 78. Again, referring to the fishing net analogy, casting a net over the stacked image reconstruction 34 will not completely define the surface area defined by the digitized optical sections 32. The solution to this problem involves identifying the lateral indentation 78 at the maximum point of advance, and then subdividing the object at the maximum point of advance, creating a top partial wrap 80 and a bottom partial wrap 82. Next, the method proceeds by performing the aforementioned steps of creating the faceted image reconstruction 36 on the top partial wrap 80 and the bottom partial wrap 82. Identification of the lateral indentation 78 generally requires manual intervention, wherein the necessity of identifying the lateral indentation 78 will depend on the particular circumstances and the contour of the particular object involved. Joining the top partial wrap 80 and the bottom partial wrap 82 at their seam results in creation of a partial faceted image reconstruction 98. The partial faceted image reconstruction 98 clearly shows the lateral indentation 78. Since the lateral indentation 78 could appear in either the top wrap 84 or the bottom wrap 86, the process of creating the partial faceted image reconstruction 98 merely involves dividing either the top wrap 84 or the bottom wrap 86 at the lateral indentation 78, and then separately processing the top partial wrap 80 and the bottom partial wrap 82. This process, of course, can repeat in order to define successive lateral indentations. Some situations may require tracking the motility and morphology of an interior portion of the moving object. Fig. 12 shows an example of such a situation. Fig. 12 shows a stacked image reconstruction 34 of a cell over a plurality of time periods. Each of the stacked image reconstructions 34 of the cell contains a slot 40, representing the location of the nucleus of the cell. Below each of the stacked image reconstructions 34 appears a faceted slot 42 representing the nucleus of the cell. Fig. 13 shows a single digitized optical section 32 with a slot 40, and a slice 74 which divides the digitized optical section 32 and the slot 40 into two portions. Creating the stacked image reconstruction 34 of Fig. 12 involves outlining each digitized optical section 32, identifying a slot 40 in each of the digitized optical sections 32, and dividing each slot 40 of each digitized optical section 32 at a slice 74. The stacked image reconstruction 34 involves stacking one of the portions of the digitized optical sections 32 defined by the slice 74. This allows viewing both the stacked image reconstruction 34 and the slot 40 in the same image. Outlining the slot 40 can involve the aforementioned automatic outlining process, or can proceed manually.
Fig. 14 shows an example of a plurality of faceted image reconstructions 36 over a period of time including a first faceted slot 44, a second faceted slot 46, and a third faceted slot 48. Fig. 14 shows the faceted image reconstruction 36 at seven different time periods, and from two different viewing attitudes. The top group of faceted image reconstructions 36 appears at a 0° viewing attitude, while the bottom group of faceted image reconstructions 36 appears at a 90° viewing attitude. In this manner, Fig. 14 shows that the method of the present invention can depict the motility and morphology of not only a moving object, but of selected portions of the moving object.
The reconstruction methods of the present invention provide a three dimensional mathematical model for computing motility and dynamic morphology parameters. Fig. 8 shows an example of a graphical user interface screen that allows a user to select from a plurality of parameters representing the motility and morphology of an object. Calculation of parameters representing the motility and dynamic morphology of an object requires defining the following notation:
Notation:
"F" equals the total number of digitized optical sections involved in the calculation, while "f "equals the digitized optical section subject to the current calculation; "X[fj,Y[fj" equals the coordinates of the centroid of digitized optical section f, where 1< f < F; "I" equals the centroid increment and defines what previous and subsequent mean (for example a centroid increment of I means the centroid based calculations of the N'th digitized optical section use the N-I previous digitized optical section and the N+I subsequent digitized optical section), increasing the centroid increment tends to smooth the particular value, and reduces sudden uneven jumps; "n" equals the number of pixels in a digitized optical section's outline, where Pj . . .
PN represents the n individual pixels, and where Pxn and Pyn comprises the X and Y coordinates of the n'th pixel; "frate" equals the number of digitized optical sections per unit of time; "scale" equals the scale factor in distance units per pixel; "sqrt[number]" returns the square root of the number;
"abs[numberj" returns the absolute value of the number;
"angle[X, Y]" returns the angle in degrees between a vector with origin (X, Y) and the
X axis, with positive angles measured counter-clockwise; "NAN" equals NOT A NUMBER, an arbitrary large designation (1,000,000 for example) generally used to indicate a non-processed value; and
"Central Difference Method" (CDM), CDM calculations use the previous and subsequent centroids in the calculation, while non CDM calculations use only the pervious centroid. Parameters: Speed:
For (f - I < 1),
Speed[f] = 0
For (f - I ≥ 1),
Speed[f] = (scale)(frate)sqrt[ ((X[f] - X[f-I])/I)² + ((Y[f] - Y[f-I])/I)² ]
Speed (CDM):
For (f - I ≥ 1) and (f + I ≤ F),
Speed[f] = (scale)(frate)sqrt[ ((X[f+I] - X[f-I])/I)² + ((Y[f+I] - Y[f-I])/I)² ]
For (f - I < 1) and (f + I ≤ F),
Speed[f] = (scale)(frate)sqrt[ ((X[f+I] - X[f])/I)² + ((Y[f+I] - Y[f])/I)² ]
For (f - I ≥ 1) and (f + I > F),
Speed[f] = (scale)(frate)sqrt[ ((X[f] - X[f-I])/I)² + ((Y[f] - Y[f-I])/I)² ]
For all other f,
Speed[f] = 0
Direction:
For (f - I ≥ 1),
Dir[f] = angle[ (X[f] - X[f-I]) , (Y[f] - Y[f-I]) ]
For (f - I < 1) and (f + I ≤ F),
Dir[f] = angle[ (X[f+I] - X[f]) , (Y[f+I] - Y[f]) ]
For all other f,
Dir[f] = 0
Direction (CDM):
For (f - I ≥ 1) and (f + I ≤ F),
Dir[f] = angle[ (X[f+I] - X[f-I]) , (Y[f+I] - Y[f-I]) ]
For (f - I < 1) and (f + I ≤ F),
Dir[f] = angle[ (X[f+I] - X[f]) , (Y[f+I] - Y[f]) ]
For (f - I ≥ 1) and (f + I > F),
Dir[f] = angle[ (X[f] - X[f-I]) , (Y[f] - Y[f-I]) ]
For all other f,
Dir[f] = 0
Note - Multiples of ±360° are added to the direction to make the graph continuous. For example, an object moving in a spiral would have directions: 0°, 45°, 90°, 135°, 180°, 225°, 270°, 315°, 360°, 405°, etc.
Direction Change:
For (f - I < 1),
DirCh[f] = 0
For all other f,
DirCh[f] = abs[ Dir[f] - Dir[f-I] ]
Note - If the direction change is greater than 180° it is subtracted from 360°. This always gives values between 0° and 180°.
Acceleration:
For (f - I ≥ 1),
Acc[f] = Speed[f] - Speed[f-I]
For all other f,
Acc[f] = 0
Acceleration (CDM):
For (f - I ≥ 1) and (f + I ≤ F),
Acc[f] = (Speed[f+I] - Speed[f-I])/2
For (f - I < 1) and (f + I ≤ F),
Acc[f] = (Speed[f+I] - Speed[f])/2
For (f - I ≥ 1) and (f + I > F),
Acc[f] = (Speed[f] - Speed[f-I])/2
For all other f,
Acc[f] = 0
Persistence:
Persis[f] = Speed[f]/(1 + (100/360)(DirCh[f]))
Note - Persistence is essentially speed divided by the direction change (converted from degrees to grads). One is added to the denominator to prevent division by 0. If an object is not turning, the persistence equals the speed.
Centroid:
CenX[f] = ( Σ Pxi )/n, summed over the n pixels P1 . . . Pn of the outline
CenY[f] = ( Σ Pyi )/n
Note - To convert the centroid to a meaningful number requires multiplication by the scale factor.
Axis Tilt:
This first requires defining the major axis of the digitized optical section. This involves finding the pixel furthest from the centroid; this pixel becomes the first end point of the major axis. The second end point of the major axis comprises the pixel furthest from the first end point of the major axis. Thus, the major axis equals the chord connecting the first end point to the second end point.
Tilt[f] = angle in degrees between the major axis and the horizontal axis
Note - Multiples of ±180° are added to the axis tilt for continuity. In this case divide the axis tilt by 180 and take the remainder. Thus, the graph of axis tilt versus time for an oblong object spinning at a constant rate will have a constant positive slope for a counter-clockwise spin.
Mean Width:
MeanWid[f] = Area[f]/MaxLen[f]
Maximum Width:
MaxWid[f] = length of the longest chord perpendicular to the major axis
Central Width:
CenWid[f] = length of the chord perpendicular to the major axis and passing through the centroid
X Bounded Width:
XWid[f] = width of the smallest rectangle enclosing the digitized optical section's outline
Maximum Length:
MaxLen[f] = length of the major axis
Y Bounded Width:
YWid[f] = height of the smallest rectangle enclosing the digitized optical section's outline
X Slice Width:
XSWid[f] = the length of the longest chord parallel to the XWid[f]
Y Slice Width:
YSWid[f] = the length of the longest chord parallel to the YWid[f]
Area:
Area[f] equals the area of the outline of the digitized optical section shape minus any holes. Let X[i],Y[i] for i = 0 . . . n be the vertices of the outline such that X[0] = X[n] and Y[0] = Y[n] (the first vertex is the last vertex). Further, let dx[i] = X[i+1] - X[i] and dy[i] = Y[i+1] - Y[i]. Then by Green's Theorem the area is the following:
Area[f] = 0.5 abs[ Σ ( X[i]dy[i] - Y[i]dx[i] ) ], summed for i = 0 to n-1
Perimeter:
The perimeter equals the perimeter of the outline of the digitized optical section plus the perimeter of any holes. Let X[i],Y[i] for i = 0 . . . n be the vertices of the outline such that X[0] = X[n] and Y[0] = Y[n] (the first vertex is the last vertex). Further, let dx[i] = X[i+1] - X[i] and dy[i] = Y[i+1] - Y[i]. Then the perimeter is the following:
Perimeter[f] = Σ sqrt[ dx[i]² + dy[i]² ], summed for i = 0 to n-1
Roundness:
Round[f] = (100)(4π)(Area[f]/Perim[f]²)
Roundness is a measure (in percent) of how efficiently a given amount of perimeter encloses an area. A circle has a roundness of 100%, while a straight line has a roundness of 0%. The factor of 4π in the formula ensures a roundness value of 100% for a circle. The perimeter is squared to make the roundness invariant (i.e., dimensionless).
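A sketch of the area, perimeter, and roundness calculations for a closed outline follows (a hypothetical helper following the formulas above):

    import math

    def shape_parameters(xs, ys):
        # xs, ys: outline vertices with the first vertex repeated as the last,
        # i.e. xs[0] == xs[-1] and ys[0] == ys[-1].
        area = perim = 0.0
        for i in range(len(xs) - 1):
            dx, dy = xs[i + 1] - xs[i], ys[i + 1] - ys[i]
            area += xs[i] * dy - ys[i] * dx    # Green's theorem term
            perim += math.hypot(dx, dy)
        area = 0.5 * abs(area)
        roundness = 100.0 * 4.0 * math.pi * area / perim ** 2
        return area, perim, roundness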
Predicted Volume:
Vol[f] = (4π/3)(MaxLen[f]/2)(MeanWid[f]/2)²
The predicted volume Vol[f] is the volume of the ellipsoid, with circular cross-section, having length MaxLen[f] and width MeanWid[f].
Predicted Surface:
Sur[f] = (CF)(π)(MaxLen[f])(MeanWid[f])
The predicted surface area Sur[f] equals the surface area of the ellipsoid, with circular cross-section, having length MaxLen[f] and width MeanWid[f], where CF is the ellipsoidal surface correction factor defined by
CF = ∫ (sin[X])(sqrt[ sin²[X] + (r²)(cos²[X]) ]) dX, integrated from X = 0 to π/2
where r = MeanWid[f]/MaxLen[f]. Using Simpson's Rule with N = 10000, the computer approximates the solution of CF with the following polynomial:
CF = 0.15r² + 0.065r + 0.785
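The polynomial can be checked against the integral numerically. In the sketch below, the integration limits of 0 to π/2 and the squared ratio inside the radical are assumptions; with them, the Simpson's rule value matches the quoted polynomial at r = 0 (0.785), r = 0.5 (approximately 0.855), and r = 1 (1.0):

    import math

    def cf_simpson(r, n=10000):
        # Simpson's rule on [0, pi/2]; n must be even.
        h = (math.pi / 2) / n
        def g(x):
            s, c = math.sin(x), math.cos(x)
            return s * math.sqrt(s * s + r * r * c * c)
        total = g(0.0) + g(math.pi / 2)
        for k in range(1, n):
            total += (4 if k % 2 else 2) * g(k * h)
        return total * h / 3.0

    def cf_polynomial(r):
        # The quadratic approximation quoted above.
        return 0.15 * r ** 2 + 0.065 * r + 0.785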
Mean Radial Length:
The mean radial length RadLen[f] is the average distance from the centroid to the boundary pixels. Let n be the number of vertices (equal to the number of boundary pixels) of the digitized optical section's outline, indexed from 0 to n-1. Let L[i] equal the distance from the i'th vertex to the centroid. Then
RadLen[f] = ( Σ L[i] )/n, summed for i = 0 to n-1
Radial Deviation:
The radial deviation RadDev[f] equals the ratio of the standard deviation of the radial lengths to the mean radial length, in percent. Let SD equal the standard deviation of L[0] . . . L[n-1]. Then
RadDev[f] = (100)SD/RadLen[f]
Convexity and Concavity:
To compute Convex[f] and Concav[f] requires drawing line segments connecting the vertices of the outline. The angles of turning 116 from one segment to the next are measured (Fig. 17). Counter-clockwise turning represents a positive angle, while clockwise turning represents a negative angle. For a closed outline, these angles always add up to 360°. The procedure repeats for holes in the outline.
Convex[f] = sum of the positive turning angles
Concav[f] = abs[ sum of the negative turning angles ]
Also, Convex[f] - Concav[f] = (360)(1 + Number of Holes). Convexity and concavity measure the relative complexity of the shape of the outline. For example, the convexity of a circle equals 360 and the concavity equals 0.
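A sketch of the turning-angle sums follows (a hypothetical helper; vertices are listed once, without the repeated closing vertex):

    import math

    def convexity_concavity(xs, ys):
        n = len(xs)
        convex = concav = 0.0
        for i in range(n):
            ax, ay = xs[i] - xs[i - 1], ys[i] - ys[i - 1]              # incoming
            bx, by = xs[(i + 1) % n] - xs[i], ys[(i + 1) % n] - ys[i]  # outgoing
            # Signed turning angle from the incoming to the outgoing segment.
            turn = math.degrees(math.atan2(ax * by - ay * bx,
                                           ax * bx + ay * by))
            if turn >= 0.0:
                convex += turn
            else:
                concav -= turn
        return convex, concav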
Positive and Negative Flow:
Positive flow essentially measures the amount of new area formed in a certain amount of time (or in the flow increment), expressed in percent. Conversely, negative flow measures the amount of area lost over the period of time designated by the flow increment, in percent. In other words, positive and negative flow measure the percent of area expansion and contraction of an object over a period of time. In particular, let f equal the current frame and FI equal the flow increment. Let A equal the interior of the f-FI outline, minus any holes, and B equal the interior of the f'th outline, minus any holes (with positive and negative flow undefined for f-FI < 1). Furthermore, let P equal the area in B not present in A, or P = B - A. Let N equal the area in A not present in B, or N = A - B. Then
PosFlow[f] = (100)Area(P)/Area(A)
NegFlow[f] = (100)Area(N)/Area(A)
An additional option for calculation of flow involves fixing the centroids over the flow increment. This aligns the B and A areas so that the centroids overlap prior to computing flow, and subtracts centroid movement out of the shape change.
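A sketch of positive and negative flow from two boolean interior masks follows (a hypothetical helper):

    import numpy as np

    def flow(mask_prev, mask_curr):
        # mask_prev, mask_curr: boolean interiors of the f-FI and f'th outlines.
        a = mask_prev.sum()
        pos = np.logical_and(mask_curr, ~mask_prev).sum()   # area gained
        neg = np.logical_and(mask_prev, ~mask_curr).sum()   # area lost
        return 100.0 * pos / a, 100.0 * neg / a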
Sectors:
Sector Area, Sector Perimeter, Sector Positive Flow, and Sector Negative Flow comprise derivative measurements of the respective standard parameters. The sector measurements allow parameterization of a subset, or sector, of a particular outline. The user inputs the beginning and ending angles of the flow range in degrees, and the flow range is divided into four sectors. For example, entering 0 and 360 will produce four sectors, with sector 1 consisting of 0° to 90°, sector 2 consisting of 91° to 180°, sector 3 consisting of 181° to 270°, and sector 4 consisting of 271° to 360°. The following summarizes a number of three dimensional parameters representing the motility and dynamic morphology of an object:
3D Centroid:
The average of all the X coordinates of each facet vertex, the average of all of the Y coordinates of each facet vertex, and the average of all of the Z coordinates of each facet vertex.
3D Surface Area:
The sum of the surface areas of all of the facets.
3D Volume:
This involves first converting each facet into a prism by extending the facet inward to the centroid. The volume then equals the sum of the volumes of each prism.
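As an illustration, the following sketch computes a 3D surface area and volume from a facet list (a hypothetical triangulated layout; the text describes extending each facet inward to the centroid, which for a triangulated facet yields a tetrahedron rather than a prism):

    import numpy as np

    def facet_surface_and_volume(facets, centroid):
        # facets: (n, 3, 3) array of triangulated facet vertices; centroid: (3,).
        surface = volume = 0.0
        for tri in facets:
            surface += 0.5 * np.linalg.norm(np.cross(tri[1] - tri[0],
                                                     tri[2] - tri[0]))
            # Volume of the tetrahedron formed by the facet and the centroid.
            volume += abs(np.dot(np.cross(tri[0] - centroid, tri[1] - centroid),
                                 tri[2] - centroid)) / 6.0
        return surface, volume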
3D Height:
The difference between the highest and lowest Z coordinate.
3D Bulk Height:
In the case where the highest or lowest Z coordinate comes from an extruding thin tendril of the object, the bulk height might yield more meaningful information. The first step comprises setting a 3D volume threshold in percent. The 3D Bulk Height equals the 3D Height after eliminating a portion of the 3D Volume equal to the threshold percentage.
3D Length:
The longest chord extending through the centroid from one facet to another.
3D Width:
The longest chord coming from the disc defined by all chords passing through the centroid and perpendicular to the 3D Length.
Sphericity:
The 3D analog of roundness, essentially a measurement of the efficiency of enclosing the 3D Volume with the 3D Surface Area, in percent. The Sphericity essentially comprises an invariant ratio of the area to the volume. The sphericity of a perfect sphere would equal 100%.
Overhang:
This measures the amount any portion of the object overhangs a base of the object, in terms of a ratio scaled from 0 (no overhang) to 100 (maximum overhang). For a given stacked image reconstruction, let A equal the width of the base digitized optical section's outline as viewed from a given attitude. Let B equal the width of the widest digitized optical section's outline. Then the Overhang equals the ratio of B:A.
Area of Projection:
The Area of Projection equals the ratio of the length of the digitized optical section's outline with the greatest area to the length of the base digitized optical section's outline. Those of ordinary skill in the art will realize the possibility of converting any number of the two dimensional parameters into three dimensional parameters. For example, all of the centroid based parameters easily convert to three dimensional parameters by substituting the 3D Centroid. Additionally, the graphical user interface allows for the plotting of graphs of each of the parameters. Fig. 18 shows an example of a graph of Speed versus time. Fig. 16 shows an alternative embodiment of the 3D-DIAS System 10 configured for parallel processing. The configuration involves connecting ten state of the art parallel processors 90 through a network connection. The parallel processors 90 can comprise PowerPC based Macintosh clones communicating over a fast ethernet network (two megabytes per second transfer rate) or over accelerated SCSI ports (ten megabytes per second). It is anticipated that by utilizing 225 megahertz PowerPC based computers connected by ethernet the 3D-DIAS System 10 can accomplish near real time reconstruction and analysis. In the configuration shown in Fig. 16, a distribution work station 88 controls the frame grabber 18. The ten parallel processors 90 perform the steps of outlining, and an integration work station 92 integrates the information from the parallel processors 90 and generates the reconstructed images. The reconstructed image is then played as a dynamic 3D image with four superimposed mini screens which display selected parameters on the computer display terminal 26. The parallel processing system utilizes software which includes a program for distributing the digitized optical sections 32 among the parallel processors 90, and software for reintegrating the information from the parallel processors 90 in the integration work station 92.
The foregoing detailed description of the present invention comprises an exemplary embodiment. An additional embodiment, included herewith in the form of a microfiche deposit of the 3D-DIAS software source code, will vary in some specific aspects of implementation without departing from the scope of the present invention. The foregoing description and drawings comprise illustrative embodiments of the present invention. The foregoing embodiments and the methods described herein may vary based on the ability, experience, and preference of those skilled in the art. Merely listing the steps of the method in a certain order does not constitute any limitation on the order of the steps of the method. The foregoing description and drawings merely explain and illustrate the invention, and the invention is not limited thereto, except insofar as the claims are so limited. Those skilled in the art who have the disclosure before them will be able to make modifications and variations therein without departing from the scope of the invention.

Claims

I claim:
1. A method for three dimensional elapsed time analysis of the motility and morphology of a moving object, said method comprising: a) optically sectioning an object at a plurality of focal depths over a first period of time; b) digitizing each of said plurality of optical sections; c) identifying with each of said plurality of digitized optical sections a tag that allows identification of at least said time and said focal depth of each of said plurality of digitized optical sections; d) outlining the periphery of said object for each of said plurality of digitized optical sections; e) calculating from said plurality of digitized optical sections a plurality of parameters representing the motility and morphology of said object; f) reconstructing from said plurality of digitized optical sections a three dimensional graphical reconstruction of said object for computerized viewing; and g) repeating said steps of said method for at least one other period of time.
2. The invention in accordance with claim 1 further comprising displaying said graphical representation with a pseudo-three-dimensional viewing means.
3. The invention in accordance with claim 1 further comprising the step of windowing said plurality of optical sections whereby only a portion of interest of said plurality of optical sections is digitized.
4. The invention in accordance with claim 1 wherein said object is optically sectioned by differential interference contrast microscopy.
5. The invention in accordance with claim 1 wherein said optical sections are digitized by a frame grabber.
6. The invention in accordance with claim 1 wherein said optical sections are digitized by a digital camera.
7. The invention in accordance with claim 1 wherein said outlining utilizes a complexity algorithm.
8. The invention in accordance with claim 1 wherein said outlining further comprises: a) applying a complexity algorithm to each pixel of said digitized optical section thereby removing said pixels below said complexity threshold; b) dilating each of said remaining pixels of said digitized optical section; c) smoothing each of said remaining pixels of said digitized optical section; d) applying a grayscale threshold to each of said remaining pixels thereby removing said pixels below said grayscale threshold; e) applying a minimum pixel group filter thereby removing any pixel groups below said minimum pixel group size; f) applying a maximum pixel filter thereby removing any pixel groups above said maximum pixel filter size; and g) eroding said remaining pixels of said digitized optical section.
9. The invention in accordance with claim 8 further comprising eroding each of said remaining pixels of said digitized optical section a number of times equal to the number of said dilating steps plus the number of said smoothing steps.
10. The invention in accordance with claim 1 wherein said graphical representation of said object comprises creating a three dimensional elapsed time stacked image reconstruction.
11. The invention in accordance with claim 10 wherein the steps of creating said three dimensional elapsed time stacked image reconstruction comprise: a) selecting an attitude for viewing said reconstruction; b) assigning said plurality of digitized optical sections a height corresponding to said focal depth as viewed from said attitude; and c) identifying pixels corresponding to only that portion of an area of each of said plurality of digitized optical sections defined by said outline and not overlapped by another of said plurality of digitized optical sections, as viewed from said viewing attitude.
12. The invention in accordance with claim 1 wherein said graphical representation of said object comprises creating a three dimensional elapsed time faceted image reconstruction.
13. The invention in accordance with claim 12 wherein said steps of creating said three dimensional elapsed time faceted image reconstruction comprise: a) creating a top wrap, said steps of creating said top wrap comprising: i) assigning said plurality of digitized optical sections a height corresponding to said focal depth; ii) identifying pixels corresponding to only that portion of an area of each of said plurality of digitized optical sections defined by said outline and not overlapped by another digitized optical section, as viewed from a top of said reconstruction; and iii) assigning each of said pixels of said digitized optical sections an X, Y, and Z coordinate, wherein said X and Y coordinates correspond to said pixel location, and for each of said pixels lying on an outline of a digitized optical section said Z coordinate is assigned said height of said digitized optical section, and for each of said pixels within any of said outlines of said plurality of digitized optical sections said Z coordinate is assigned a height based on a weighted average; b) creating a bottom wrap, said steps of creating said bottom wrap comprising: i) assigning said plurality of digitized optical sections a height corresponding to said focal depth; ii) identifying pixels corresponding to only that portion of an area of each of said plurality of digitized optical sections defined by said outline and not overlapped by another digitized optical section, as viewed from a bottom of said reconstruction; and iii) assigning each of said pixels of said digitized optical sections an X, Y,
and Z coordinate, wherein said X and Y coordinates correspond to said
pixel location, and for each of said pixels lying on an outline of a
digitized optical section said Z coordinate is assigned said height of
said digitized optical section, and for each of said pixels within any of
said outlines of said plurality of digitized optical sections said Z
coordinate is assigned a height based on a weighted average;
c) dividing said top wrap and said bottom wrap into facets, wherein said facets
encompass the surface area of said top and said bottom wrap; and
d) joining said top wrap and said bottom wrap at a seam defined by the
intersection of said facets of said top wrap with said facets of said bottom
wrap.
14. The invention in accordance with claim 13 wherein said weighted average is
calculated by defining a plurality of rays extending from each of said pixels within
any of said outlines of said plurality of digitized optical sections to the nearest of said
outlines, and wherein the shortest of said rays are weighted more than the longest of
said rays.
15. The invention in accordance with claim 14 further comprising eight rays of lengths L1, L2, L3, L4, L5, L6, L7, and L8 extending at 45° angles to the nearest eight outlines of heights H1, H2, H3, H4, H5, H6, H7, and H8, and said Z coordinate is defined by
Z = [H1(1/L1) + H2(1/L2) + H3(1/L3) + H4(1/L4) + H5(1/L5) + H6(1/L6) + H7(1/L7) + H8(1/L8)] / [(1/L1) + (1/L2) + (1/L3) + (1/L4) + (1/L5) + (1/L6) + (1/L7) + (1/L8)].
16. The invention in accordance with claim 13 further comprising: a) identifying a lateral indentation in said plurality of digitized optical sections at a maximum point of advance; b) subdividing said plurality of digitized optical sections at said maximum point of advance thereby creating a top partial wrap and a bottom partial wrap; and c) performing the steps of claim 13 on said top partial wrap and said bottom partial wrap.
17. The invention in accordance with claim 1 further comprising creating a slot by outlining the periphery of a section of said object, for each of said plurality of digitized optical sections.
18. A method for three dimensional elapsed time analysis of the motility and morphology of a moving object, said method comprising: a) optically sectioning by differential interference contrast microscopy an object at a plurality of focal depths over a first period of time; b) windowing said plurality of optical sections whereby only a portion of interest of said plurality of optical sections is digitized; c) digitizing each of said plurality of optical sections with a frame grabber; d) identifying with each of said plurality of digitized optical sections a tag that allows identification of at least said time and said focal depth of each of said plurality of digitized optical sections; e) outlining the periphery of said object for each of said plurality of digitized optical sections, said outlining comprising: i) applying a complexity algorithm to each pixel of said digitized optical section thereby removing said pixels below said complexity threshold; ii) dilating each of said remaining pixels of said digitized optical section; iii) smoothing each of said remaining pixels of said digitized optical section; iv) applying a grayscale threshold to each of said remaining pixels thereby removing said pixels below said grayscale threshold; v) applying a minimum pixel group filter thereby removing any pixel groups below said minimum pixel group size; vi) applying a maximum pixel group filter thereby removing any pixel groups above said maximum pixel group size; and vii) eroding said remaining pixels of said digitized optical section a number of times equal to the number of said dilating steps plus the number of said smoothing steps; f) calculating from said plurality of digitized optical sections a plurality of parameters representing the motility and morphology of said object; g) reconstructing from said plurality of digitized optical sections a three dimensional graphical elapsed time faceted image reconstruction of said object for computerized viewing, said reconstructing comprising: i) creating a top wrap, said steps of creating said top wrap comprising: a) assigning said plurality of digitized optical sections a height
corresponding to said focal depth; b) identifying pixels corresponding to only that portion of an area of each of said plurality of digitized optical sections defined by said outline and not overlapped by another digitized optical section, as viewed from a top of said reconstruction; and c) assigning each of said pixels of said digitized optical sections an X, Y, and Z coordinate, wherein said X and Y coordinates correspond to said pixel location, and for each of said pixels lying on an outline of a digitized optical section said Z coordinate is assigned said height of said digitized optical section, and for each of said pixels within any of said outlines of said plurality of digitized optical sections said Z coordinate is assigned a height based on a weighted average, wherein said weighted average is calculated by defining eight rays of lengths L1, L2, L3, L4, L5, L6, L7, and L8 extending from each of said pixels within any of said outlines at 45° angles to the nearest eight outlines of heights H1, H2, H3, H4, H5, H6, H7, and H8, and said Z coordinate is defined by
Z = [H1(1/L1) + H2(1/L2) + H3(1/L3) + H4(1/L4) + H5(1/L5) + H6(1/L6) + H7(1/L7) + H8(1/L8)] / [(1/L1) + (1/L2) + (1/L3) + (1/L4) + (1/L5) + (1/L6) + (1/L7) + (1/L8)];
ii) creating a bottom wrap, said steps of creating said bottom wrap
comprising: a) assigning said plurality of digitized optical sections a height
corresponding to said focal depth,
b) identifying pixels conespondmg only that portion of an area of
each of said plurality of digitized optical sections defined by
said outline and not overlapped by another digitized optical
section, as viewed from a bottom of said reconstruction, and
c) assigning each of said pixels of said digitized optical sections an X, Y, and Z coordinate, wherein said X and Y coordinates correspond to said pixel location, and for each of said pixels lying on an outline of a digitized optical section said Z coordinate is assigned said height of said digitized optical section, and for each of said pixels within any of said outlines of said plurality of digitized optical sections said Z coordinate is assigned a height based on a weighted average, wherein said weighted average is calculated by defining eight rays of lengths L1, L2, L3, L4, L5, L6, L7, and L8 extending from each of said pixels within any of said outlines at 45° angles to the nearest eight outlines of heights H1, H2, H3, H4, H5, H6, H7, and H8, and said Z coordinate is defined by
Z = [H1(1/L1) + H2(1/L2) + H3(1/L3) + H4(1/L4) + H5(1/L5) + H6(1/L6) + H7(1/L7) + H8(1/L8)] / [(1/L1) + (1/L2) + (1/L3) + (1/L4) + (1/L5) + (1/L6) + (1/L7) + (1/L8)];
iii) dividing said top wrap and said bottom wrap into facets, wherein said facets encompass the surface area of said top and said bottom wrap; and
iv) joining said top wrap and said bottom wrap at a seam defined by the intersection of said facets of said top wrap with said facets of said bottom wrap;
h) displaying said graphical representation with a pseudo-three-dimensional viewing means;
i) creating a slot by outlining the periphery of a section of said object, for each of said plurality of digitized optical sections; and
j) repeating said steps of said method for at least one other period of time.
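[Illustrative sketch, not part of the claims.] The weighted average of claim 18(g) is an inverse-distance interpolation: each interior pixel takes its height from the eight nearest outline pixels found along rays spaced 45° apart, each hit weighted by the reciprocal of its ray length. A minimal Python sketch follows; the NaN-coded height grid and the use of step counts as ray lengths are assumptions of this sketch, not details fixed by the claim.

```python
import numpy as np

# The eight ray directions, 45 degrees apart, named in claim 18(g).
RAYS = [(-1, -1), (0, -1), (1, -1),
        (-1, 0),           (1, 0),
        (-1, 1),  (0, 1),  (1, 1)]

def interpolate_z(x, y, outline_height):
    """Inverse-distance-weighted Z for an interior pixel (x, y).

    outline_height: 2-D array holding the section height at every outline
    pixel and NaN everywhere else (an encoding chosen for this sketch).
    Each ray is walked until it meets an outline pixel; the hit height Hi
    and ray length Li feed Z = sum(Hi/Li) / sum(1/Li).
    """
    rows, cols = outline_height.shape
    num = den = 0.0
    for dx, dy in RAYS:
        cx, cy, length = x, y, 0
        while True:
            cx, cy, length = cx + dx, cy + dy, length + 1
            if not (0 <= cx < cols and 0 <= cy < rows):
                break  # ray left the image without meeting an outline
            if not np.isnan(outline_height[cy, cx]):
                num += outline_height[cy, cx] / length
                den += 1.0 / length
                break
    return num / den if den else float("nan")
```

The outlining pipeline of claim 18(e) can be sketched the same way. The patent's complexity algorithm is not reproduced here, so a local standard deviation stands in for it, SciPy's binary morphology stands in for the dilation, smoothing, and erosion steps, and every threshold and group-size parameter is hypothetical.

```python
import numpy as np
from scipy import ndimage

def outline_section(img, complexity_thresh, gray_thresh,
                    min_group, max_group, n_dilate=1, n_smooth=1):
    """Sketch of claim 18(e): threshold, dilate, smooth, re-threshold,
    size-filter, then erode once per prior dilation and smoothing."""
    img = np.asarray(img, dtype=float)

    # e(i): remove pixels below a complexity threshold (local std-dev proxy).
    mean = ndimage.uniform_filter(img, size=3)
    var = ndimage.uniform_filter(img ** 2, size=3) - mean ** 2
    mask = np.sqrt(np.maximum(var, 0.0)) >= complexity_thresh

    # e(ii)/e(iii): dilate, then smooth (binary closing as the proxy).
    for _ in range(n_dilate):
        mask = ndimage.binary_dilation(mask)
    for _ in range(n_smooth):
        mask = ndimage.binary_closing(mask)

    # e(iv): grayscale threshold on the surviving pixels.
    mask &= img >= gray_thresh

    # e(v)/e(vi): drop connected pixel groups below min_group or above max_group.
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    for i, size in enumerate(sizes, start=1):
        if size < min_group or size > max_group:
            mask[labels == i] = False

    # e(vii): erode as many times as the dilations plus smoothings.
    for _ in range(n_dilate + n_smooth):
        mask = ndimage.binary_erosion(mask)
    return mask
```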
19. An apparatus for three dimensional elapsed time analysis of the motility and morphology of a moving object, said apparatus comprising:
a) a microscope for optically sectioning a motile object at a plurality of focal depths over a first period of time;
b) a means for digitizing each of said plurality of optical sections;
c) a means for identifying each of said plurality of digitized optical sections with a tag that allows identification of said time and said focal depth of each of said plurality of digitized optical sections;
d) a computer means for outlining the periphery of said object for each of said plurality of digitized optical sections;
e) a computer means for calculating from said plurality of digitized optical sections a plurality of parameters representing the motility and morphology of said object;
f) a computer means for reconstructing from said plurality of digitized optical sections a three dimensional graphical reconstruction of said object for computerized viewing; and
g) a computer means for repeating said steps of said method for at least one other period of time.
20. The invention in accordance with claim 19 further comprising a stepper motor attached to said microscope.
21. The invention in accordance with claim 19 wherein said microscope utilizes differential interference contrast microscopy.
22. The invention in accordance with claim 19 wherein said means for digitizing each of said plurality of optical sections comprises a frame grabber.
23. The invention in accordance with claim 19 wherein said means for digitizing each of said plurality of optical sections comprises a digital camera.
24. The invention in accordance with claim 19 wherein said means for identifying each of said plurality of digitized optical sections with a tag that allows identification of said time and said focal depth of each of said plurality of digitized optical sections comprises a character generator.
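[Illustrative sketch, not part of the claims.] The tag of claims 18(d) and 24 carries two values per section: acquisition time and focal depth. The patent's character generator writes this tag into the video frame itself; the hypothetical record below only shows the information content of such a tag.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class TaggedSection:
    """One digitized optical section plus the tag of claims 18(d)/24."""
    image: np.ndarray  # the digitized optical section
    time_s: float      # elapsed time at which the section was captured
    depth_um: float    # focal depth of the section
```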
25. The invention in accordance with claim 19 wherein said computer means comprises a single general purpose computer.
26. The invention in accordance with claim 19 further comprising:
a) a first distribution workstation computer for receiving said digitized optical sections, said time, and said focal depth;
b) a plurality of parallel processor computers for outlining the periphery of said object for each of said plurality of digitized optical sections; and
c) an integration workstation computer for integrating said outlines, calculating said parameters, and reconstructing said three dimensional graphical representation of said object.
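[Illustrative sketch, not part of the claims.] Claim 26 splits the work across a distribution workstation, parallel outlining processors, and an integration workstation. A single-machine stand-in using a process pool is sketched below; outline_section is the hypothetical function from the sketch above, and all parameter values are placeholders.

```python
from multiprocessing import Pool

def outline_all_sections(sections, processes=4):
    """Farm sections out to parallel workers, then collect the outlines
    for parameter calculation and reconstruction (claim 26(a)-(c))."""
    args = [(img, 10.0, 60.0, 50, 50000) for img in sections]
    with Pool(processes) as pool:  # the "parallel processor computers"
        return pool.starmap(outline_section, args)

# Run under `if __name__ == "__main__":` on platforms that spawn workers.
```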
PCT/US1999/013193 1998-06-25 1999-06-10 Three dimensional dynamic image analysis system WO1999067739A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU44361/99A AU4436199A (en) 1998-06-25 1999-06-10 Three dimensional dynamic image analysis system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US10451898A 1998-06-25 1998-06-25
US09/104,518 1998-06-25

Publications (1)

Publication Number Publication Date
WO1999067739A1 true WO1999067739A1 (en) 1999-12-29

Family

ID=22300917

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1999/013193 WO1999067739A1 (en) 1998-06-25 1999-06-10 Three dimensional dynamic image analysis system

Country Status (2)

Country Link
AU (1) AU4436199A (en)
WO (1) WO1999067739A1 (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4584704A (en) * 1984-03-01 1986-04-22 Bran Ferren Spatial imaging system
US5740266A (en) * 1994-04-15 1998-04-14 Base Ten Systems, Inc. Image processing system and method
US5805742A (en) * 1995-08-16 1998-09-08 Trw Inc. Object detection system with minimum-spanning gradient filter for scene clutter suppression

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SOLL D R: "THE USE OF COMPUTERS IN UNDERSTANDING HOW ANIMAL CELLS CRAWL", INTERNATIONAL REVIEW OF CYTOLOGY, ACADEMIC PRESS, NEW YORK, US, vol. 163, 1 January 1995 (1995-01-01), US, pages 43 - 104, XP002924413, ISSN: 0074-7596 *

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6631331B1 (en) 1999-05-14 2003-10-07 Cytokinetics, Inc. Database system for predictive cellular bioinformatics
US6615141B1 (en) 1999-05-14 2003-09-02 Cytokinetics, Inc. Database system for predictive cellular bioinformatics
US6651008B1 (en) 1999-05-14 2003-11-18 Cytokinetics, Inc. Database system including computer code for predictive cellular bioinformatics
US6738716B1 (en) 1999-05-14 2004-05-18 Cytokinetics, Inc. Database system for predictive cellular bioinformatics
US6743576B1 (en) 1999-05-14 2004-06-01 Cytokinetics, Inc. Database system for predictive cellular bioinformatics
US7218764B2 (en) 2000-12-04 2007-05-15 Cytokinetics, Inc. Ploidy classification method
US7657076B2 (en) 2001-02-20 2010-02-02 Cytokinetics, Inc. Characterizing biological stimuli by response curves
US7269278B2 (en) 2001-02-20 2007-09-11 Cytokinetics, Inc. Extracting shape information contained in cell images
US7151847B2 (en) 2001-02-20 2006-12-19 Cytokinetics, Inc. Image analysis of the golgi complex
US6956961B2 (en) 2001-02-20 2005-10-18 Cytokinetics, Inc. Extracting shape information contained in cell images
US6999607B2 (en) 2001-02-20 2006-02-14 Cytokinetics, Inc. Method and apparatus for automated cellular bioinformatics
US7194124B2 (en) 2002-04-09 2007-03-20 University Of Iowa Research Foundation Reconstruction and motion analysis of an embryo
WO2003088150A1 (en) * 2002-04-09 2003-10-23 University Of Iowa Research Foundation Reconstruction and motion analysis of an embryo
US7817840B2 (en) 2003-07-18 2010-10-19 Cytokinetics, Inc. Predicting hepatotoxicity using cell based assays
US7246012B2 (en) 2003-07-18 2007-07-17 Cytokinetics, Inc. Characterizing biological stimuli by response curves
US7235353B2 (en) 2003-07-18 2007-06-26 Cytokinetics, Inc. Predicting hepatotoxicity using cell based assays
US7323318B2 (en) 2004-07-15 2008-01-29 Cytokinetics, Inc. Assay for distinguishing live and dead cells
EP1811017A1 (en) * 2004-11-09 2007-07-25 Hitachi Medical Corporation Cell cultivating device, image processing device and cell detecting system
JPWO2006051813A1 (en) * 2004-11-09 2008-05-29 株式会社日立メディコ Cell culture device, image processing device, and cell detection system
EP1811017A4 (en) * 2004-11-09 2010-12-01 Kaneka Corp Cell cultivating device, image processing device and cell detecting system
US8064661B2 (en) 2004-11-09 2011-11-22 Kaneka Corporation Cell culture device, image processing device and cell detecting system
CN101048492B (en) * 2004-11-09 2013-01-09 株式会社钟化 Cell cultivating device, image processing device and cell detecting system
CN100429551C (en) * 2005-06-16 2008-10-29 武汉理工大学 Composing method for large full-scene depth picture under microscope
CN110389127A (en) * 2019-07-03 2019-10-29 浙江大学 A kind of identification of cermet part and surface defects detection system and method

Also Published As

Publication number Publication date
AU4436199A (en) 2000-01-10

Similar Documents

Publication Publication Date Title
WO1999067739A1 (en) Three dimensional dynamic image analysis system
US7194124B2 (en) Reconstruction and motion analysis of an embryo
US6867772B2 (en) 3D computer modelling apparatus
US5751852A Image structure map data structure for spatially indexing an image
Vaquero et al. A survey of image retargeting techniques
Potmesil Generating octree models of 3D objects from their silhouettes in a sequence of images
US5809179A (en) Producing a rendered image version of an original image using an image structure map representation of the image
EP1953701B1 (en) Hybrid volume rendering in computer implemented animation
US7684622B2 (en) Method, system and program product for representing a perceptual organization of an image
US8248410B2 (en) Synthesizing detailed depth maps from images
US9013499B2 (en) Methods and apparatus for multiple texture map storage and filtering including irregular texture maps
EP1687777A2 Method and system for distinguishing surfaces in 3d data sets ("dividing voxels")
Kiess et al. Seam carving with improved edge preservation
Price et al. Object-based vectorization for interactive image editing
CN110223376B (en) Three-dimensional particle reconstruction method based on single accumulated particle material image
US20050151734A1 (en) Method and apparatus for rendering, storing and editing voxel objects
EP1445736B1 (en) Method and system for providing a volumetric representation of a three-dimensional object
CN113066004A (en) Point cloud data processing method and device
Tanimoto Image data structures
Farella et al. Analysing key steps of the photogrammetric pipeline for Museum artefacts 3D digitisation
EP2118852B1 (en) Concept for synthesizing texture in a video sequence
Laycock et al. Exploring cultural heritage sites through space and time
Radig Image region extraction of moving objects
JP2005326173A (en) Sphericity calculation program
Alonso et al. Back-to-front ordering of triangles in digital terrain models over regular grids

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AU CA JP

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE

121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase