WO2014066231A1 - Cell identification method and device, and urine analyzer - Google Patents

Cell identification method and device, and urine analyzer

Info

Publication number
WO2014066231A1
WO2014066231A1 PCT/US2013/065879
Authority
WO
WIPO (PCT)
Prior art keywords
image
filter
frequency
cell
cell identification
Prior art date
Application number
PCT/US2013/065879
Other languages
French (fr)
Inventor
Ying CHI
Zi Hua SU
Zhi Yuan ZHANG
Original Assignee
Siemens Healthcare Diagnostics Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Siemens Healthcare Diagnostics Inc. filed Critical Siemens Healthcare Diagnostics Inc.
Publication of WO2014066231A1 publication Critical patent/WO2014066231A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/69Microscopic objects, e.g. biological cells or cellular parts
    • G06V20/695Preprocessing, e.g. image segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/42Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
    • G06V10/431Frequency domain transformation; Autocorrelation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/69Microscopic objects, e.g. biological cells or cellular parts
    • G06V20/698Matching; Classification

Definitions

  • the present invention relates to the technical field of cell identification. More particularly, it relates to a method and device capable of identifying red blood cells and white blood cells in urine sediment with greater accuracy and speed, and further to a urine analyzer comprising the device.
  • the urine of a normal person may contain very small amounts of red blood cells, white blood cells, epithelial cells, crystals and mucus strands, and in rare cases transparent casts. However, it is also possible for excessive amounts of red blood cells, abnormal epithelial cells, casts, bacteria, trichomonas, tumor cells and viral inclusion bodies to be present.
  • urine sediment examinations are generally aimed at quantifying the types of urine sediment mentioned above.
  • the object of a urine sediment examination is to identify various pathological components of urine such as cells, crystals, bacteria and parasites; urine sediment can generally reflect quite accurately the actual situation regarding cell components, casts, epithelial cells and crystals in urine. Therefore urine sediment testing is an important routine test item which assists in diagnosing, locating and distinguishing urinary system diseases and making prognoses for them. In cases where pathological changes cannot be found in ordinary examinations of properties or chemical tests, minute changes can be discerned by means of a sediment examination.
  • Urine sediment examination indices generally include testing for red blood cells and white blood cells, etc.
  • the background of microscope images introduces noise, while wide differences are apparent in cell size, shape and texture.
  • Fig. 1 shows examples of different classes of object requiring identification. It can be seen from Fig. 1 that within each class there are several more groups of cells or particles. For instance, the class of red blood cells further comprises four particular forms. For this reason, the identification of red blood cells (red blood corpuscles) and white blood cells (white blood corpuscles) in urine sediment is a difficult task.
  • the original image is first segmented to extract target objects (e.g. cells to be identified).
  • Cells are then classified by way of feature extraction.
  • Common segmentation methods include a Sobel, Roberts or Canny kernel used in combination with an active contour or level set method.
  • active contour and level set methods are extremely time-consuming due to the iterative curve evolution step, and neither these contour methods nor the Sobel, Roberts or Canny kernel is able to remove defocusing (fuzzy) noise prior to segmentation.
  • Complicated methods include SIFT (scale-invariant feature transform) and local grayscale invariant methods, which are extremely time-consuming because they involve difference of Gaussian (DoG) scale space construction.
  • although the Haar feature (Adaboost training) method is comparatively simple in theory, the training processing thereof is extremely time-consuming because it only uses simple features (e.g. rectangular features).
  • a cell identification method comprising the following steps:
  • an image acquisition step for acquiring an original image
  • a defocusing interference removal step for transforming the original image to the frequency domain, acquiring image high-frequency information by means of a first filter, acquiring image edge information by means of a second filter, and performing an inverse transformation to the time domain and extracting image energy, so as to obtain a denoised high-frequency edge image comprising only high-frequency edges;
  • a segmentation step for subjecting the high-frequency edge image to Gaussian blur processing, then choosing a suitable threshold to perform binarization, marking out a cell area bounded by edges, and retrieving cell detail information covered by the marked area from the original image, so as to remove interference from noise points outside the focal plane; and
  • a classification step for calculating multiple features for each cell, and classifying each target on the basis of the multiple features.
  • the first filter is a logGabor filter
  • the second filter is a complex-valued monogenic filter
  • the transfer function of the logGabor filter is g(ω) = exp( −(log(ω/ω₀))² / (2(log(β/ω₀))²) )
  • the multiple features comprise at least one of the following features: circularity, rectangularity, gray-level co-occurrence matrix contrast property, gray-level co-occurrence matrix homogeneity property, gray-level co-occurrence matrix energy property, and mutual information between a target object image and an average template.
  • the phase feature pf is obtained by the following formula: pf = arctan( f_logGabor / √((real(h))² + (imagin(h))²) ), wherein f_logGabor = ∫(F(k)·logGabor)e^(2πikx)dk, h = ∫(F(k)·logGabor·H)e^(2πikx)dk, and F(k) is the result of the original image undergoing a Fourier transform.
  • a cell identification device comprising :
  • an image acquisition unit for acquiring an original image
  • a defocusing interference removal unit for subjecting the original image to denoising processing to obtain a high-frequency edge image, and comprising:
  • a Fourier transform component for transforming the original image to the frequency domain
  • a first filter for acquiring image high-frequency information
  • a second filter for acquiring image edge information
  • an image energy extraction component for extracting image energy, so as to obtain a denoised high-frequency edge image comprising only high-frequency edges
  • a segmentation unit for subjecting the high-frequency edge image to Gaussian blur processing, then choosing a suitable threshold to perform binarization, marking out a cell area bounded by edges, and retrieving cell detail information covered by the marked area from the original image, so as to remove interference from noise points outside the focal plane;
  • a classification unit for calculating multiple features for each cell, and classifying each target on the basis of the multiple features.
  • the first filter is a logGabor filter
  • the second filter is a complex-valued monogenic filter
  • the transfer function of the logGabor filter is g(ω) = exp( −(log(ω/ω₀))² / (2(log(β/ω₀))²) )
  • the multiple features comprise at least one of the following features: circularity, rectangularity, gray-level co-occurrence matrix contrast property, gray-level co-occurrence matrix homogeneity property, gray-level co-occurrence matrix energy property, and mutual information between a target object image and an average template.
  • the mutual information between the target object image and the average template is obtained by matching of phase features, and the phase feature pf is obtained by the following formula:
  • F(k) is the result of the original image undergoing a Fourier transform.
  • a urine analyzer comprising any one of the above cell identification devices.
  • Adaboost training processing is accelerated by first eliminating defocusing interference in the image background, and then using a set of excellent features; since defocusing interference is removed from the original image background before segmentation, a good foundation is laid for subsequent processing. Furthermore, in the method for extracting features of red blood cells and white blood cells, the present invention proposes a set of new combined features, thereby enabling genuine (but not typical) red blood cells and white blood cells to be distinguished more effectively from amongst urine sediment objects.
  • Fig. 1 shows examples of particular forms of red blood cells, white blood cells and crystals.
  • Fig. 2 is a flow chart showing the procedure of the cell identification method according to the embodiments of the present invention.
  • Fig. 3 shows an example of an original image.
  • Fig. 4 shows an image obtained by subjecting the original image to denoising and segmentation processing.
  • Figs. 5A - 5F show 3 types of bowl-shaped red blood cells and their corresponding phase features.
  • Fig. 6 is a block diagram showing the configuration of the cell identification device according to the embodiments of the present invention.
  • Fig. 2 is a flow chart showing the procedure of the cell identification method according to the embodiments of the present invention. As Fig. 2 shows, the cell identification method comprises the following steps:
  • step S201 an original image f(x) is acquired.
  • Fig. 3 shows an example of an original image so acquired. It can be seen from Fig. 3 that very strong defocusing (fuzzy) noise is present in the original image.
  • step S202 the original image is transformed to the frequency domain by means of a Fourier transform.
  • the 2-dimensional Fourier transform F(k) of the original image f(x) is obtained by the following formula: F(k) = ∫ f(x) e^(−2πikx) dx
  • step S203 image high-frequency information is acquired by means of a first filter.
  • step S204 image edge information is acquired by means of a second filter.
  • step S205 an inverse transformation to the time domain is performed.
  • step S206 image energy is extracted, so as to obtain a denoised, high-frequency edge image comprising only high-frequency edges.
  • the first filter may be a logGabor filter (also called a Log-Gabor filter), the transfer function thereof being g(ω) = exp( −(log(ω/ω₀))² / (2(log(β/ω₀))²) )
  • ω is the frequency
  • ω₀ is the center frequency of the logGabor filter
  • β is a constant. It must be pointed out that the value of β should be chosen so as to keep the value of β/ω₀ constant. For instance, when the value of β/ω₀ is 0.74, 0.55 and 0.41, the bandwidth of the logGabor filter is approximately 1, 2 and 3 octaves, respectively.
  • the second filter may for example be a complex-valued monogenic filter.
  • the transfer function of the complex-valued monogenic filter is H(u₁, u₂) = (j·u₁ − u₂) / √(u₁² + u₂²), wherein u₁ and u₂ are the horizontal and vertical coordinates in the frequency space
  • the steps of inverse-transforming the image back to the time domain and extracting image energy specifically comprise: (i) calculating the original image restored to the time domain after undergoing wavelet filtering
  • step S207 the high-frequency edge image is subjected to Gaussian blur processing; a suitable threshold is then chosen to perform binarization, and a cell area bounded by edges is marked; cell detail information covered by the marked area is retrieved from the original image, so as to remove interference from noise points outside the focal plane.
  • Fig. 4 shows an image obtained by subjecting the original image to denoising and segmentation processing. It can be seen from Fig. 4 that defocusing noise has been completely eliminated from the original image, leaving behind only the objects of interest.
  • step S208 multiple features are calculated for each cell, and each target is classified on the basis of these multiple features.
  • Adaboost is taken as an example of a classification method.
  • those skilled in the art should appreciate that other classification methods are also possible.
  • Extraction of features of red blood cells and white blood cells for the construction of a training cluster comprises: using basic image object properties, such as area, circularity, rectangularity, low image brightness to area ratio, gray-level co-occurrence matrix properties (mainly contrast, homogeneity and energy) and mutual information about average templates of a small set (a normal red blood cell average template, a wrinkled red blood cell average template and a white blood cell average template) to distinguish red blood cells and white blood cells from objects of all other types in the urine sediment.
  • S is the area of a cell or particle
  • L is the diameter thereof.
  • the object area is used to separate small cells (such as red blood cells, white blood cells and crystals) from minute single yeast cells and large cells (such as epithelial cells and casts).
  • small cells such as red blood cells, white blood cells and crystals
  • large cells such as epithelial cells and casts
  • rectangularity may be used to further distinguish square crystals from round crystals.
  • the rectangularity R is calculated by the following formula: R = S / (W × H)
  • W is the width of an object
  • H is the height thereof.
  • P(x,y) is the joint probability.
  • P(x,y) is the number of times x and y appear at the same time divided by the total number of points (samples) in the image;
  • P(x) is the number of times x appears divided by the total number of points (samples) in the image;
  • P(y) is the number of times y appears divided by the total number of points (samples) in the image. If X and Y are unrelated, the value of MI(X;Y) is 0.
  • the phase feature is a robust texture feature that is unaffected by strong illumination from different directions.
  • Figs. 5A - 5F show 3 types of bowl-shaped red blood cells and their corresponding phase features. It can be seen from Fig. 5A that when the intensity of illumination varies, the clarity of the cell edges varies as well
  • Fig. 5D shows the phase feature corresponding to the cell in Fig. 5A. It can be seen from Fig. 5D that in comparison, the intensity of illumination has no bearing on whether the cell edges are clear. Therefore the precision of matching can be increased effectively if phase features are matched to obtain mutual information.
  • a phase feature pf can be further extracted from the result obtained in steps S203 - S206.
  • texture information based on gray-level co-occurrence matrix properties mainly comprising the following parameters is used to further distinguish between round crystals and other cells of similar size (mainly red blood cells and white blood cells).
  • the gray-level co-occurrence matrix contrast is calculated by the following formula:
  • Contrast = Σᵢ,ⱼ |i − j|² P(i, j)
  • P(i,j) is the probability of the gray-level co-occurrence matrix.
  • the gray-level co-occurrence matrix homogeneity is calculated by the following formula: Homogeneity = Σᵢ,ⱼ P(i, j) / (1 + |i − j|)
  • the gray-level co-occurrence matrix energy property is calculated by the following formula: Energy = Σᵢ,ⱼ P(i, j)²
  • Fig. 6 is a block diagram showing the configuration of the cell identification device according to the embodiments of the present invention.
  • the cell identification device 600 comprises an image acquisition unit 601, a defocusing interference removal unit 602, a segmentation unit 603 and a classification unit 604.
  • the image acquisition unit 601 acquires an original image, which it then supplies to the defocusing interference removal unit 602.
  • the defocusing interference removal unit 602 subjects the original image to denoising processing, to obtain a high-frequency edge image.
  • the defocusing interference removal unit 602 further comprises the following components: a Fourier transform component 6021, for transforming the original image to the frequency domain; a first filter 6022, for acquiring image high-frequency information; a second filter 6023, for acquiring image edge information; a Fourier inverse transform component 6024, for inverse-transforming the filtered image to the time domain; an image energy extraction component 6025, for extracting image energy in order to obtain a denoised, high-frequency edge image comprising only high-frequency edges.
  • the segmentation unit 603 subjects the high-frequency edge image to Gaussian blur processing, then chooses a suitable threshold to perform binarization, marks out a cell area bounded by edges, and retrieves cell detail information covered by the marked area from the original image, so as to remove interference from noise points outside the focal plane.
  • the classification unit 604 calculates multiple features for each cell, and classifies each target on the basis of the multiple features.
  • the multiple features comprise at least one of the following features: circularity, rectangularity, gray-level co-occurrence matrix contrast property, gray-level co-occurrence matrix homogeneity property, gray-level co-occurrence matrix energy property, and mutual information between a target object image and an average template.
  • the first filter is a logGabor filter
  • the second filter is a complex-valued monogenic filter.
  • the mutual information between the target object image and the average template is obtained by matching of phase features.
  • a urine analyzer is provided, comprising any one of the above cell identification devices.
  • the cell identification method comprises the following steps: an image acquisition step, for acquiring an original image; a defocusing interference removal step, for transforming the original image to the frequency domain, acquiring image high-frequency information by means of a first filter, acquiring image edge information by means of a second filter, and performing an inverse transformation to the time domain and extracting image energy, so as to obtain a denoised image comprising only high-frequency edges; a segmentation step, for subjecting the high-frequency edge image to Gaussian blur processing, then choosing a suitable threshold to perform binarization, marking out a cell area bounded by edges, and retrieving cell detail information covered by the marked area from the original image, so as to remove interference from noise points outside the focal plane; and a classification step, for calculating multiple features for each cell, and classifying each target on the basis of the multiple features.
  • denoising can be performed before segmentation.
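The combined feature set listed above (circularity, rectangularity, and the gray-level co-occurrence matrix properties) can be sketched in Python. The exact circularity definition and the GLCM offset are assumptions, since the patent's formula images are not reproduced here; the standard forms shown in the comments are used instead.

```python
import numpy as np

def circularity(area, perimeter):
    # Common circularity definition 4*pi*S / L^2 (assumption; the patent
    # defines S as the area and L as the contour length/"diameter").
    return 4.0 * np.pi * area / (perimeter ** 2)

def rectangularity(area, width, height):
    # R = S / (W * H): fraction of the bounding box the object fills
    # (assumed form; the patent gives only the W, H, S definitions).
    return area / (width * height)

def glcm(gray, levels=8):
    # Normalized gray-level co-occurrence matrix for the (0, 1) offset
    # (horizontal neighbors); the offset choice is illustrative.
    q = (gray.astype(float) / 256.0 * levels).astype(int).clip(0, levels - 1)
    P = np.zeros((levels, levels))
    for i in range(q.shape[0]):
        for j in range(q.shape[1] - 1):
            P[q[i, j], q[i, j + 1]] += 1
    return P / P.sum()

def glcm_features(P):
    i, j = np.indices(P.shape)
    contrast = np.sum(np.abs(i - j) ** 2 * P)        # sum |i-j|^2 P(i,j)
    homogeneity = np.sum(P / (1.0 + np.abs(i - j)))  # sum P(i,j)/(1+|i-j|)
    energy = np.sum(P ** 2)                          # sum P(i,j)^2
    return contrast, homogeneity, energy
```

For a perfect circle (area πr², perimeter 2πr) the circularity evaluates to 1, which is the usual sanity check for this definition.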

Abstract

Disclosed in the present application are a cell identification method and device, and a urine analyzer. The cell identification method comprises the following steps: an image acquisition step, for acquiring an original image; a defocusing interference removal step, for transforming the original image to the frequency domain, acquiring image high-frequency information by means of a first filter, acquiring image edge information by means of a second filter, and performing an inverse transformation to the time domain and extracting image energy, so as to obtain a denoised image comprising only high-frequency edges.

Description

CELL IDENTIFICATION METHOD
AND DEVICE, AND URINE ANALYZER
PRIORITY STATEMENT
[001] This application claims benefit under 35 U.S.C. §119 of Chinese Patent Application Number CN 201210418733.8 filed October 26, 2012, the entire contents of which are hereby incorporated herein by reference.
TECHNICAL FIELD
[002] The present invention relates to the technical field of cell identification. More particularly, it relates to a method and device capable of identifying red blood cells and white blood cells in urine sediment with greater accuracy and speed, and further to a urine analyzer comprising the device.
BACKGROUND OF THE INVENTION
[003] The urine of a normal person may contain very small amounts of red blood cells, white blood cells, epithelial cells, crystals and mucus strands, and in rare cases transparent casts. However, it is also possible for excessive amounts of red blood cells, abnormal epithelial cells, casts, bacteria, trichomonas, tumor cells and viral inclusion bodies to be present.
[004] Urine sediment examinations are generally aimed at quantifying the types of urine sediment mentioned above. The object of a urine sediment examination is to identify various pathological components of urine such as cells, crystals, bacteria and parasites; urine sediment can generally reflect quite accurately the actual situation regarding cell components, casts, epithelial cells and crystals in urine. Therefore urine sediment testing is an important routine test item which assists in diagnosing, locating and distinguishing urinary system diseases and making prognoses for them. In cases where pathological changes cannot be found in ordinary examinations of properties or chemical tests, minute changes can be discerned by means of a sediment examination.
[005] Urine sediment examination indices generally include testing for red blood cells and white blood cells, etc. However, in practice, the background of microscope images introduces noise, while wide differences are apparent in cell size, shape and texture. Fig. 1 shows examples of different classes of object requiring identification. It can be seen from Fig. 1 that within each class there are several more groups of cells or particles. For instance, the class of red blood cells further comprises four particular forms. For this reason, the identification of red blood cells (red blood corpuscles) and white blood cells (white blood corpuscles) in urine sediment is a difficult task.
[006] In the prior art, the original image is first segmented to extract target objects (e.g. cells to be identified). Cells are then classified by way of feature extraction. Common segmentation methods include a Sobel, Roberts or Canny kernel used in combination with an active contour or level set method. However, active contour and level set methods are extremely time-consuming due to the iterative curve evolution step, and neither these contour methods nor the Sobel, Roberts or Canny kernel is able to remove defocusing (fuzzy) noise prior to segmentation.
[007] Furthermore, the feature extraction methods (used for training) in the prior art are extremely time-consuming. Complicated methods include SIFT (scale-invariant feature transform) and local grayscale invariant methods, which are extremely time-consuming because they involve difference of Gaussian (DoG) scale space construction. Although the Haar feature (Adaboost training) method is comparatively simple in theory, the training processing thereof is extremely time-consuming because it only uses simple features (e.g. rectangular features).
[008] In addition, the feature extraction methods in the prior art are only suitable for processing typical data, not actual data.
[009] In view of the above, there is a need for a new combination of methods for denoising prior to segmentation and performing feature extraction before Adaboost.
SUMMARY OF THE INVENTION
[0010] In view of the above, an object of the present invention is to propose a new cell identification method, to enable more accurate and faster identification of red blood cells and white blood cells in urine sediment. Another object of the present invention is to propose a new cell identification device, to enable more accurate and faster identification of red blood cells and white blood cells in urine sediment. Another object of the present invention is to propose a urine analyzer comprising the above cell identification device.
[0011] According to one aspect of the present invention, a cell identification method is provided, comprising the following steps:
[0012] an image acquisition step, for acquiring an original image;
[0013] a defocusing interference removal step, for transforming the original image to the frequency domain, acquiring image high-frequency information by means of a first filter, acquiring image edge information by means of a second filter, and performing an inverse transformation to the time domain and extracting image energy, so as to obtain a denoised high-frequency edge image comprising only high-frequency edges;
[0014] a segmentation step, for subjecting the high-frequency edge image to Gaussian blur processing, then choosing a suitable threshold to perform binarization, marking out a cell area bounded by edges, and retrieving cell detail information covered by the marked area from the original image, so as to remove interference from noise points outside the focal plane; and
[0015] a classification step, for calculating multiple features for each cell, and classifying each target on the basis of the multiple features.
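The segmentation step described above (Gaussian blur of the high-frequency edge image, thresholding and binarization, then masking the original image) can be sketched with scipy.ndimage. The blur width and the mean-plus-one-standard-deviation threshold are illustrative assumptions, not values given by the patent.

```python
import numpy as np
from scipy import ndimage

def segment(high_freq_edge_img, original_img, sigma=2.0, thresh=None):
    # Gaussian blur of the denoised high-frequency edge image.
    blurred = ndimage.gaussian_filter(high_freq_edge_img.astype(float), sigma)
    # Choose a suitable threshold (here mean + 1 std, an assumed
    # heuristic) and binarize.
    if thresh is None:
        thresh = blurred.mean() + blurred.std()
    mask = blurred > thresh
    # Fill holes so that each cell area bounded by edges is marked out.
    mask = ndimage.binary_fill_holes(mask)
    # Retrieve cell detail covered by the marked area from the original
    # image; defocused background outside the mask is suppressed.
    return np.where(mask, original_img, 0), mask
```

The masked output keeps only in-focus objects, which is the "good foundation" the method lays for the later feature-extraction and classification steps.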
[0016] Preferably, in the cell identification method according to the embodiments of the present invention,
[0017] the first filter is a logGabor filter, the second filter is a complex-valued monogenic filter, and the transfer function of the logGabor filter is

g(ω) = exp( −(log(ω/ω₀))² / (2(log(β/ω₀))²) )

[0018] wherein ω₀ is the center frequency of the logGabor filter, and β is a constant, [0019] the transfer function of the complex-valued monogenic filter is

H(u₁, u₂) = (j·u₁ − u₂) / √(u₁² + u₂²)

[0020] wherein u₁ and u₂ are the horizontal coordinate and vertical coordinate, respectively, in the frequency space, and ω = √(u₁² + u₂²).
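The two transfer functions can be built on a discrete frequency grid as follows. This is a sketch: the center frequency `w0` and the ratio β/ω₀ = 0.55 (roughly a 2-octave bandwidth, per the description) are assumed example values.

```python
import numpy as np

def log_gabor(shape, w0=0.1, ratio=0.55):
    # Radial logGabor transfer function
    #   g(w) = exp(-(log(w/w0))^2 / (2 (log(beta/w0))^2)),
    # parameterized directly by ratio = beta/w0 (0.55 ~ 2 octaves).
    u1 = np.fft.fftfreq(shape[1])[None, :]
    u2 = np.fft.fftfreq(shape[0])[:, None]
    w = np.sqrt(u1 ** 2 + u2 ** 2)
    w[0, 0] = 1.0  # avoid log(0); the DC term is zeroed below
    g = np.exp(-(np.log(w / w0) ** 2) / (2 * np.log(ratio) ** 2))
    g[0, 0] = 0.0  # a logGabor filter has no DC component
    return g

def monogenic(shape):
    # Complex-valued monogenic (Riesz) transfer function
    #   H(u1, u2) = (j*u1 - u2) / sqrt(u1^2 + u2^2).
    u1 = np.fft.fftfreq(shape[1])[None, :]
    u2 = np.fft.fftfreq(shape[0])[:, None]
    r = np.sqrt(u1 ** 2 + u2 ** 2)
    r[0, 0] = 1.0  # avoid division by zero at DC (numerator is 0 there)
    return (1j * u1 - u2) / r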
[0021] Preferably, in the cell identification method of the embodiments of the present invention,
[0022] the multiple features comprise at least one of the following features: circularity, rectangularity, gray-level co-occurrence matrix contrast property, gray-level co¬ occurrence matrix homogeneity property, gray-level co¬ occurrence matrix energy property, and mutual information between a target object image and an average template.
[0023] Preferably, in the cell identification method of the embodiments of the present invention,
[0024] the mutual information between the target object image and the average template is obtained by matching of phase features, and the phase feature pf is obtained by the following formula:
pf = arctan( f_logGabor / √( (real(h))² + (imagin(h))² ) )

where f_logGabor = ∫ (F(k) · logGabor) e^(2πikx) dk

[0025] wherein h = ∫ (F(k) · logGabor · H) e^(2πikx) dk

and F(k) is the result of the original image undergoing a Fourier transform.
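Putting the pieces together, the phase feature pf can be computed per pixel as in the formula above. The sketch below is self-contained (it re-declares the logGabor and monogenic transfer functions), and the parameter values are illustrative assumptions.

```python
import numpy as np

def phase_feature(img, w0=0.1, ratio=0.55):
    # pf = arctan( f_logGabor / sqrt(real(h)^2 + imag(h)^2) ), where
    # f_logGabor is the logGabor-filtered image back in the time domain
    # and h is the monogenic (Riesz) response of that filtered image.
    rows, cols = img.shape
    u1 = np.fft.fftfreq(cols)[None, :]
    u2 = np.fft.fftfreq(rows)[:, None]
    w = np.sqrt(u1 ** 2 + u2 ** 2)
    w[0, 0] = 1.0                      # avoid log(0)/divide-by-zero at DC
    g = np.exp(-(np.log(w / w0) ** 2) / (2 * np.log(ratio) ** 2))
    g[0, 0] = 0.0                      # no DC component
    H = (1j * u1 - u2) / w             # monogenic transfer function
    F = np.fft.fft2(img)               # F(k): image in the frequency domain
    f_lg = np.real(np.fft.ifft2(F * g))    # even (bandpass) response
    h = np.fft.ifft2(F * g * H)            # odd (Riesz) response
    # arctan2 with a non-negative second argument reproduces arctan(f/|h|)
    # while remaining defined where |h| = 0; values lie in [-pi/2, pi/2].
    return np.arctan2(f_lg, np.abs(h))
```

Because pf depends on the ratio of the even and odd responses rather than on their magnitudes, it is largely insensitive to illumination strength, which is why matching phase features improves the mutual-information comparison against the average templates.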
[0026] According to another aspect of the present invention, a cell identification device is provided, comprising :
[0027] an image acquisition unit, for acquiring an original image ;
[0028] a defocusing interference removal unit, for subjecting the original image to denoising processing to obtain a high-frequency edge image, and comprising:
[0029] a Fourier transform component, for transforming the original image to the frequency domain;
[0030] a first filter, for acquiring image high-frequency information;
[0031] a second filter, for acquiring image edge information;
[0032] an inverse Fourier transform component, for inverse-transforming the filtered image to the time domain; and
[0033] an image energy extraction component, for extracting image energy, so as to obtain a denoised high-frequency edge image comprising only high-frequency edges;
[0034] a segmentation unit, for subjecting the high-frequency edge image to Gaussian blur processing, then choosing a suitable threshold to perform binarization, marking out a cell area bounded by edges, and retrieving cell detail information covered by the marked area from the original image, so as to remove interference from noise points outside the focal plane;
[0035] a classification unit, for calculating multiple features for each cell, and classifying each target on the basis of the multiple features.
[0036] Preferably, in the cell identification device of the embodiments of the present invention, the first filter is a logGabor filter, the second filter is a complex-valued monogenic filter, and the transfer function of the logGabor filter is

g(ω) = exp( −(log(ω/ω₀))² / (2(log(β/ω₀))²) )

[0037] wherein ω₀ is the center frequency of the logGabor filter, and β is a constant,
[0038] the transfer function of the complex-valued monogenic filter is

H(u₁, u₂) = (j·u₁ − u₂) / √(u₁² + u₂²)

[0039] wherein u₁ and u₂ are the horizontal coordinate and vertical coordinate, respectively, in the frequency space, and ω = √(u₁² + u₂²).
[0040] Preferably, in the cell identification device of the embodiments of the present invention, the multiple features comprise at least one of the following features: circularity, rectangularity, gray-level co-occurrence matrix contrast property, gray-level co-occurrence matrix homogeneity property, gray-level co-occurrence matrix energy property, and mutual information between a target object image and an average template.
[0041] Preferably, in the cell identification device of the embodiments of the present invention, the mutual information between the target object image and the average template is obtained by matching of phase features, and the phase feature pf is obtained by the following formula:
pf = arctan( f_logGabor / √( (real(h))² + (imagin(h))² ) )

where f_logGabor = ∫ (F(k) · logGabor) e^(2πikx) dk

[0042] wherein h = ∫ (F(k) · logGabor · H) e^(2πikx) dk

and F(k) is the result of the original image undergoing a Fourier transform.
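The mutual information between a target object's phase feature and that of an average template can be estimated from a joint histogram of the two images, using MI(X;Y) = Σ P(x,y) log( P(x,y) / (P(x)P(y)) ). The bin count and the histogram-based discretization are illustrative assumptions.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    # MI(X;Y) = sum_{x,y} P(x,y) * log( P(x,y) / (P(x) P(y)) ), with the
    # joint probability P(x,y) estimated from a 2-D histogram of the two
    # (flattened) phase-feature images.
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal P(x)
    py = pxy.sum(axis=0, keepdims=True)   # marginal P(y)
    nz = pxy > 0  # zero-probability cells contribute 0 (avoid log 0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

As the description states, MI is 0 when the two images are statistically unrelated and grows as the target matches the template, so it can be used directly as one of the combined classification features.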
[0043] According to another aspect of the present invention, a urine analyzer is provided, comprising any one of the above cell identification devices.
[0044] In the cell identification method and device according to the embodiments of the present invention, Adaboost training processing is accelerated by first eliminating defocusing interference in the image background, and then using a set of excellent features; since defocusing interference is removed from the original image background before segmentation, a good foundation is laid for subsequent processing. Furthermore, in the method for extracting features of red blood cells and white blood cells, the present invention proposes a set of new combined features, thereby enabling genuine (but not typical) red blood cells and white blood cells to be distinguished more effectively from amongst urine sediment objects.
BRIEF DESCRIPTION OF THE DRAWINGS
[0045] Preferred embodiments of the present invention will be described in detail below with reference to the accompanying drawings, to give those skilled in the art a clearer understanding of the above and other features and advantages of the present invention. In the drawings:
[0046] Fig. 1 shows examples of particular forms of red blood cells, white blood cells and crystals.
[0047] Fig. 2 is a flow chart showing the procedure of the cell identification method according to the embodiments of the present invention.
[0048] Fig. 3 shows an example of an original image.
[0049] Fig. 4 shows an image obtained by subjecting the original image to denoising and segmentation processing.
[0050] Figs. 5A - 5F show 3 types of bowl-shaped red blood cells and their corresponding phase features.
[0051] Fig. 6 is a block diagram showing the configuration of the cell identification device according to the embodiments of the present invention.

DETAILED DESCRIPTION OF THE DRAWINGS
[0052] The present invention is illustrated in further detail below by way of embodiments, to clarify the object, technical solution and advantages of the present invention.

[0053] First of all, refer to Fig. 2, which describes the cell identification method according to the embodiments of the present invention. Fig. 2 is a flow chart showing the procedure of the cell identification method according to the embodiments of the present invention. As Fig. 2 shows, the cell identification method comprises the following steps:

[0054] First of all, in step S201, an original image f(x) is acquired. Fig. 3 shows an example of an original image so acquired. It can be seen from Fig. 3 that very strong defocusing (fuzzy) noise is present in the original image.
[0055] Next, in step S202, the original image is transformed to the frequency domain by means of a Fourier transform. The 2-dimensional Fourier transform F(k) of the original image f(x) is obtained by the following formula:

F(k) = ∫ f(x) e^(-2πikx) dx
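As an illustrative sketch (not part of the patent text), step S202 corresponds to a discrete 2-D Fourier transform, for which NumPy's `np.fft.fft2` can stand in for the integral above:

```python
import numpy as np

def to_frequency_domain(image):
    """Transform a grayscale image to the frequency domain (step S202).

    `image` is a 2-D float array; the returned complex array holds the
    discrete 2-dimensional Fourier coefficients F(k).
    """
    return np.fft.fft2(image.astype(np.float64))

# Example on a small synthetic "image"
f = np.arange(16, dtype=np.float64).reshape(4, 4)
F = to_frequency_domain(f)
# The DC coefficient F[0, 0] equals the sum of all pixel values.
assert np.isclose(F[0, 0].real, f.sum())
```

The transform is invertible, so the later inverse transformation (step S205) is `np.fft.ifft2`.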
[0056] Next, in step S203, image high-frequency information is acquired by means of a first filter. Next, in step S204, image edge information is acquired by means of a second filter. Next, in step S205, an inverse transformation to the time domain is performed. Next, in step S206, image energy is extracted, so as to obtain a denoised, high-frequency edge image comprising only high-frequency edges.

[0057] For example, the first filter may be a logGabor filter (also called a Log-Gabor filter), the transfer function thereof being as follows:
logGabor = exp( -(log(ω/ω0))² / (2 (log(β/ω0))²) )
[0058] wherein ω is frequency, ω0 is the center frequency of the logGabor filter, and β is a constant. It must be pointed out that the value of β should be chosen so as to keep the value of β/ω0 constant. For instance, when the value of β/ω0 is 0.74, 0.55 or 0.41, the bandwidth of the logGabor filter is approximately 1, 2 or 3 octaves, respectively.
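A minimal sketch of building this transfer function on a discrete frequency grid. The use of numpy's `fftfreq` grid convention, the center frequency of 0.1 cycles/pixel, and the function name are illustrative assumptions, not choices mandated by the source:

```python
import numpy as np

def log_gabor(shape, omega0=0.1, beta_over_omega0=0.55):
    """Radial logGabor transfer function exp(-(log(w/w0))^2 / (2 (log(b/w0))^2)).

    beta_over_omega0 = 0.55 gives a bandwidth of roughly 2 octaves,
    matching the values quoted in the text.
    """
    rows, cols = shape
    u1, u2 = np.meshgrid(np.fft.fftfreq(cols), np.fft.fftfreq(rows))
    omega = np.sqrt(u1**2 + u2**2)          # radial frequency
    omega[0, 0] = 1.0                       # avoid log(0); DC is zeroed below
    lg = np.exp(-(np.log(omega / omega0))**2 /
                (2 * np.log(beta_over_omega0)**2))
    lg[0, 0] = 0.0                          # a logGabor filter has no DC component
    return lg

lg = log_gabor((64, 64))
assert lg.max() <= 1.0 and lg[0, 0] == 0.0
```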
[0059] In addition, the second filter may for example be a complex-valued monogenic filter. A complex-valued monogenic filter is constructed in the frequency domain by combining two monogenic filters (with transfer functions H1 = j·u1/ω and H2 = j·u2/ω, respectively) into a single complex-valued equation, so as to lower calculation costs. The transfer function of the complex-valued monogenic filter is:

H = (j·u1 - u2) / √(u1² + u2²)

[0060] wherein u1 and u2 are the horizontal and vertical coordinates in the frequency space, i.e. [u1, u2] = meshgrid(xrange, yrange), and ω = √(u1² + u2²).
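A sketch of the complex-valued monogenic transfer function on the same assumed `fftfreq` grid; packing the two components into one complex array is exactly the cost saving the paragraph describes, since both Riesz components are then obtained with a single inverse FFT:

```python
import numpy as np

def monogenic_filter(shape):
    """Complex monogenic transfer function H = (j*u1 - u2) / sqrt(u1^2 + u2^2)."""
    rows, cols = shape
    u1, u2 = np.meshgrid(np.fft.fftfreq(cols), np.fft.fftfreq(rows))
    w = np.sqrt(u1**2 + u2**2)
    w[0, 0] = 1.0                     # avoid division by zero at DC
    H = (1j * u1 - u2) / w
    H[0, 0] = 0.0                     # no DC response
    return H

H = monogenic_filter((64, 64))
# Off the DC bin, H is a pure phase factor: |H| = 1 everywhere
assert np.isclose(np.abs(H[0, 1]), 1.0)
```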
[0061] The steps of inverse-transforming the image back to the time domain and extracting image energy specifically comprise:

[0062] (i) calculating the original image restored to the time domain after undergoing wavelet filtering:

f_logGabor = ∫ (F(k) · logGabor) e^(2πikx) dx

[0063] (ii) calculating the high-frequency edge energy:

Energy = √( (f_logGabor)² + (real(h))² + (imagin(h))² )

[0064] wherein

h = ∫ (F(k) · logGabor · H) e^(2πikx) dx
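Putting steps S203 - S206 together, a self-contained sketch under the same assumed grid conventions and illustrative filter parameters (center frequency, bandwidth ratio, and the test image are not from the source):

```python
import numpy as np

def edge_energy(image, omega0=0.1, ratio=0.55):
    """Steps S203-S206 in one pass: logGabor band-pass, monogenic filtering,
    inverse FFT, and the combination Energy = sqrt(f_lg^2 + real(h)^2 + imag(h)^2)."""
    rows, cols = image.shape
    u1, u2 = np.meshgrid(np.fft.fftfreq(cols), np.fft.fftfreq(rows))
    w = np.sqrt(u1**2 + u2**2)
    w[0, 0] = 1.0
    lg = np.exp(-(np.log(w / omega0))**2 / (2 * np.log(ratio)**2))
    lg[0, 0] = 0.0                          # band-pass: no DC
    H = (1j * u1 - u2) / w                  # complex monogenic filter
    H[0, 0] = 0.0
    F = np.fft.fft2(image.astype(np.float64))
    f_lg = np.real(np.fft.ifft2(F * lg))    # restored band-passed image
    h = np.fft.ifft2(F * lg * H)            # odd (edge) components
    return np.sqrt(f_lg**2 + h.real**2 + h.imag**2)

img = np.zeros((32, 32))
img[:, 16:] = 1.0                           # a vertical step edge (plus a wrap-around edge)
energy = edge_energy(img)
# Energy concentrates at the edge, not in the flat interior
assert energy[:, 15:18].mean() > energy[:, 7:10].mean()
```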
[0066] Next, in step S207, the high-frequency edge image is subjected to Gaussian blur processing; a suitable threshold is then chosen to perform binarization, and a cell area bounded by edges is marked; cell detail information covered by the marked area is retrieved from the original image, so as to remove interference from noise points outside the focal plane. Fig. 4 shows an image obtained by subjecting the original image to denoising and segmentation processing. It can be seen from Fig. 4 that defocusing noise has been completely eliminated from the original image, leaving behind only the objects of interest.
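A numpy-only sketch of step S207. The half-of-maximum threshold stands in for the unspecified "suitable threshold", and the synthetic stand-in edge image is purely illustrative:

```python
import numpy as np

def gaussian_blur(img, sigma=2.0):
    """Separable Gaussian blur with reflective padding (numpy-only)."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    padded = np.pad(img, radius, mode="reflect")
    rows = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="valid"), 1, padded)
    return np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="valid"), 0, rows)

def segment(original, edge_image, sigma=2.0):
    """Blur the high-frequency edge image, binarize it, and use the mask
    to retrieve cell detail from the original image (step S207)."""
    blurred = gaussian_blur(edge_image, sigma)
    mask = blurred > 0.5 * blurred.max()    # assumed "suitable threshold"
    detail = np.where(mask, original, 0.0)  # defocused background removed
    return detail, mask

original = np.zeros((32, 32))
original[8:24, 8:24] = 1.0                  # one bright mock "cell"
detail, mask = segment(original, gaussian_blur(original, 1.0))
assert detail.max() == 1.0 and detail[0, 0] == 0.0
```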
[0067] Next, in step S208, multiple features are calculated for each cell, and each target is classified on the basis of these multiple features. For example, in the present invention, Adaboost is taken as an example of a classification method. However, those skilled in the art should appreciate that other classification methods are also possible.
[0068] Extraction of features of red blood cells and white blood cells for the construction of a training cluster comprises: using basic image object properties, such as area, circularity, rectangularity, low image brightness to area ratio, gray-level co-occurrence matrix properties (mainly contrast, homogeneity and energy) and mutual information with a small set of average templates (a normal red blood cell average template, a wrinkled red blood cell average template and a white blood cell average template), to distinguish red blood cells and white blood cells from objects of all other types in the urine sediment. It must be explained that the best classification result can be achieved by using the multiple features given above. Of course, this is merely the most preferred embodiment. It is also possible to use a portion of the multiple features given above, even though it may not be possible to achieve the best result therewith. As mentioned in the background art section, feature extraction methods in the prior art cannot work, or cannot work effectively. Therefore a newly composed method is proposed herein to extract the most useful object features for subsequent Adaboost training. First of all, circularity is used to distinguish cell groups from single cells. The circularity C is calculated by the following formula:
C = 4πS / L²

[0069] wherein S is the area of a cell or particle, and L is the perimeter thereof.
[0070] Secondly, the object area is used to separate small cells (such as red blood cells, white blood cells and crystals) from minute single yeast cells and large cells (such as epithelial cells and casts). Next, rectangularity may be used to further distinguish square crystals from round crystals. The rectangularity R is calculated by the following formula:

R = S / (W × H)

[0071] wherein S is the area of the object, W is the width of the object, and H is the height thereof.
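A sketch of both shape features on a binary object mask. Circularity is computed as 4πS/L² with L taken as the perimeter (one standard reading of the partially garbled source formula), and the perimeter itself is a crude boundary-pixel count, so both are illustrative approximations rather than the patent's exact measures:

```python
import numpy as np

def shape_features(mask):
    """Circularity and rectangularity of a binary object mask."""
    S = mask.sum()                              # object area (pixel count)
    interior = mask & np.roll(mask, 1, 0) & np.roll(mask, -1, 0) \
                    & np.roll(mask, 1, 1) & np.roll(mask, -1, 1)
    L = (mask & ~interior).sum()                # crude perimeter: boundary pixels
    circularity = 4 * np.pi * S / L**2
    ys, xs = np.nonzero(mask)
    W = xs.max() - xs.min() + 1                 # bounding-box width
    H = ys.max() - ys.min() + 1                 # bounding-box height
    rectangularity = S / (W * H)                # fill ratio of the bounding box
    return circularity, rectangularity

square = np.zeros((20, 20), dtype=bool)
square[5:15, 5:15] = True                       # a 10x10 square object
c, r = shape_features(square)
assert r == 1.0                                 # a square fills its bounding box
```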
[0072] Next, mutual information with the average templates is the most effective means of distinguishing between red blood cells and white blood cells of similar size. The mutual information MI between a target object image X and an average template Y is calculated by the following formula:

MI(X;Y) = Σ_{x∈X} Σ_{y∈Y} P(x,y) log( P(x,y) / (P(x)P(y)) )
[0073] wherein P(x,y) is the joint probability. For ease of understanding, P(x,y) is the number of times x and y appear at the same time divided by the total number of points (samples) in the image; P(x) is the number of times x appears divided by the total number of points (samples) in the image; and P(y) is the number of times y appears divided by the total number of points (samples) in the image. If X and Y are unrelated, the value of MI(X;Y) is 0.
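The counting description above amounts to a joint gray-level histogram. A sketch of the resulting plug-in MI estimate, where the bin count of 8 is an illustrative choice:

```python
import numpy as np

def mutual_information(x_img, y_img, bins=8):
    """MI(X;Y) estimated from the joint gray-level histogram of a target
    image and an average template, following the formula above."""
    joint, _, _ = np.histogram2d(x_img.ravel(), y_img.ravel(), bins=bins)
    p_xy = joint / joint.sum()                # P(x, y)
    p_x = p_xy.sum(axis=1, keepdims=True)     # P(x), marginal
    p_y = p_xy.sum(axis=0, keepdims=True)     # P(y), marginal
    outer = p_x * p_y                         # independence reference P(x)P(y)
    nz = p_xy > 0                             # 0*log(0) terms contribute nothing
    return float((p_xy[nz] * np.log(p_xy[nz] / outer[nz])).sum())

rng = np.random.default_rng(0)
a = rng.random((16, 16))
b = rng.random((16, 16))
mi_same = mutual_information(a, a)            # a perfectly matched "template"
mi_diff = mutual_information(a, b)            # an unrelated "template"
assert mi_same > mi_diff                      # better match, higher MI
```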
[0074] It must be explained here that we have found that the phase feature is a robust texture feature that is insensitive to strong illumination from different directions. Figs. 5A - 5F show 3 types of bowl-shaped red blood cells and their corresponding phase features. It can be seen from Fig. 5A that when the intensity of illumination varies, the cell edges may be clear (the left edge of the cell in Fig. 5A) or fuzzy (the right edge of the cell in Fig. 5A). Fig. 5D shows the phase feature corresponding to the cell in Fig. 5A. It can be seen from Fig. 5D that, by comparison, the intensity of illumination has no bearing on whether the cell edges are clear. Therefore the precision of matching can be increased effectively if phase features are matched to obtain mutual information.
[0075] According to a more preferable embodiment, a phase feature pf can be further extracted from the result obtained in steps S203 - S206:

pf = arctan( f_logGabor / √( (real(h))² + (imagin(h))² ) )

[0076] Furthermore, since all the variables in this formula have already been calculated in steps S203 - S206, calculation costs are reduced.
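A sketch of the phase feature using the same assumed filtering as the energy computation; `arctan2` with a non-negative second argument equals the arctan of the ratio while also handling a zero denominator, and the grid and parameter choices remain illustrative assumptions:

```python
import numpy as np

def phase_feature(image, omega0=0.1, ratio=0.55):
    """pf = arctan(f_logGabor / sqrt(real(h)^2 + imag(h)^2)),
    computed from the intermediate results of steps S203-S206."""
    rows, cols = image.shape
    u1, u2 = np.meshgrid(np.fft.fftfreq(cols), np.fft.fftfreq(rows))
    w = np.sqrt(u1**2 + u2**2)
    w[0, 0] = 1.0
    lg = np.exp(-(np.log(w / omega0))**2 / (2 * np.log(ratio)**2))
    lg[0, 0] = 0.0
    H = (1j * u1 - u2) / w
    H[0, 0] = 0.0
    F = np.fft.fft2(image.astype(np.float64))
    f_lg = np.real(np.fft.ifft2(F * lg))
    h = np.fft.ifft2(F * lg * H)
    # arctan2(y, x>=0) is arctan(y/x) with the x=0 case handled safely
    return np.arctan2(f_lg, np.sqrt(h.real**2 + h.imag**2))

pf = phase_feature(np.random.default_rng(1).random((32, 32)))
assert pf.shape == (32, 32)
```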
[0077] Finally, texture information based on gray-level co-occurrence matrix properties, mainly comprising the following parameters, is used to further distinguish between round crystals and other cells of similar size (mainly red blood cells and white blood cells).
[0078] The gray-level co-occurrence matrix contrast is calculated by the following formula:
Contrast = Σ_i Σ_j |i - j|² P(i,j)    (10)
[0079] wherein P(i,j) is the probability of the gray-level co-occurrence matrix.

[0080] The gray-level co-occurrence matrix homogeneity is calculated by the following formula:

Homogeneity = Σ_i Σ_j P(i,j) / (1 + |i - j|)    (11)
[0081] The gray-level co-occurrence matrix energy property is calculated by the following formula:
Energy = Σ_i Σ_j (P(i,j))²    (12)
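A sketch of formulas (10) - (12) from a normalized co-occurrence matrix. The horizontal 1-pixel offset and the 8-level quantization are illustrative choices; the source does not fix the offset direction:

```python
import numpy as np

def glcm_features(img, levels=8):
    """Contrast, homogeneity and energy (formulas (10)-(12)) from a
    normalized gray-level co-occurrence matrix at horizontal offset 1."""
    q = np.clip((img * levels).astype(int), 0, levels - 1)  # quantize gray levels
    left, right = q[:, :-1].ravel(), q[:, 1:].ravel()       # horizontal pixel pairs
    P = np.zeros((levels, levels))
    np.add.at(P, (left, right), 1)                          # co-occurrence counts
    P /= P.sum()                                            # joint probabilities P(i, j)
    i, j = np.indices(P.shape)
    contrast = (np.abs(i - j)**2 * P).sum()                 # formula (10)
    homogeneity = (P / (1 + np.abs(i - j))).sum()           # formula (11)
    energy = (P**2).sum()                                   # formula (12)
    return contrast, homogeneity, energy

flat = np.full((16, 16), 0.5)                               # a perfectly uniform patch
c, h, e = glcm_features(flat)
assert c == 0.0 and h == 1.0 and e == 1.0                   # single-cell GLCM
```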
[0082] The cell identification method according to the embodiments of the present invention has been described above in detail with reference to Figs. 1 to 5. The cell identification device according to the embodiments of the present invention will be described in detail below.
[0083] Fig. 6 is a block diagram showing the configuration of the cell identification device according to the embodiments of the present invention. As Fig. 6 shows, the cell identification device 600 comprises an image acquisition unit 601, a defocusing interference removal unit 602, a segmentation unit 603 and a classification unit 604.
[0084] The image acquisition unit 601 acquires an original image, which it then supplies to the defocusing interference removal unit 602.
[0085] The defocusing interference removal unit 602 subjects the original image to denoising processing, to obtain a high-frequency edge image. Specifically, the defocusing interference removal unit 602 further comprises the following components: a Fourier transform component 6021, for transforming the original image to the frequency domain; a first filter 6022, for acquiring image high-frequency information; a second filter 6023, for acquiring image edge information; a Fourier inverse transform component 6024, for inverse-transforming the filtered image to the time domain; an image energy extraction component 6025, for extracting image energy in order to obtain a denoised, high-frequency edge image comprising only high-frequency edges.
[0086] The segmentation unit 603 subjects the high- frequency edge image to Gaussian blur processing, then chooses a suitable threshold to perform binarization, marks out a cell area bounded by edges, and retrieves cell detail information covered by the marked area from the original image, so as to remove interference from noise points outside the focal plane.
[0087] The classification unit 604 calculates multiple features for each cell, and classifies each target on the basis of the multiple features. The multiple features comprise at least one of the following features: circularity, rectangularity, gray-level co-occurrence matrix contrast property, gray-level co-occurrence matrix homogeneity property, gray-level co-occurrence matrix energy property, and mutual information between a target object image and an average template.
[0088] As stated above, the first filter is a logGabor filter, while the second filter is a complex-valued monogenic filter. Moreover, the mutual information between the target object image and the average template is obtained by matching of phase features. [0089] According to another embodiment of the present invention, a urine analyzer is provided, comprising any one of the above cell identification devices.
[0090] Disclosed in the present application are a cell identification method and device, and a urine analyzer. The cell identification method comprises the following steps: an image acquisition step, for acquiring an original image; a defocusing interference removal step, for transforming the original image to the frequency domain, acquiring image high- frequency information by means of a first filter, acquiring image edge information by means of a second filter, and performing an inverse transformation to the time domain and extracting image energy, so as to obtain a denoised image comprising only high-frequency edges; a segmentation step, for subjecting the high-frequency edge image to Gaussian blur processing, then choosing a suitable threshold to perform binarization, marking out a cell area bounded by edges, and retrieving cell detail information covered by the marked area from the original image, so as to remove interference from noise points outside the focal plane; and a classification step, for calculating multiple features for each cell, and classifying each target on the basis of the multiple features. According to the technical solution of the present application, denoising can be performed before segmentation.
[0091] The above embodiments are merely preferred embodiments of the present invention, and are by no means intended to limit it. Any modifications, equivalent substitutions or improvements etc. made without departing from the spirit and principles of the present invention should be included in the scope of protection thereof.

Claims

THE INVENTION CLAIMED IS :
1. A cell identification method, comprising the following steps :
an image acquisition step, for acquiring an original image ;
a defocusing interference removal step, for transforming the original image to the frequency domain, acquiring image high-frequency information by means of a first filter, acquiring image edge information by means of a second filter, and performing an inverse transformation to the time domain and extracting image energy, so as to obtain a high-frequency edge image;
a segmentation step, for subjecting the high-frequency edge image to Gaussian blur processing, then performing binarization, marking out a cell area bounded by edges, and retrieving cell detail information covered by the marked area from the original image; and
a classification step, for calculating multiple features for each cell, and classifying each cell on the basis of the multiple features.
2. The cell identification method as claimed in claim 1, wherein
the first filter is a logGabor filter, the second filter is a complex-valued monogenic filter, and the transfer function of the logGabor filter is
logGabor = exp( -(log(ω/ω0))² / (2 (log(β/ω0))²) )

wherein ω is frequency, ω0 is the center frequency of the logGabor filter, and β is a constant,
the transfer function of the complex-valued monogenic filter is

H = (j·u1 - u2) / √(u1² + u2²)

wherein u1 and u2 are the horizontal coordinate and vertical coordinate, respectively, in the frequency space, and ω = √(u1² + u2²).
3. The cell identification method as claimed in claim 1, wherein
the multiple features comprise at least one of the following features: circularity, rectangularity, gray-level co-occurrence matrix contrast property, gray-level co¬ occurrence matrix homogeneity property, gray-level co¬ occurrence matrix energy property, and mutual information between a target object image and an average template.
4. The cell identification method as claimed in claim 3, wherein
the mutual information between the target object image and the average template is obtained by matching of phase features, and the phase feature pf is obtained by the following formula:
pf = arctan( f_logGabor / √( (real(h))² + (imagin(h))² ) )

wherein f_logGabor = ∫ (F(k) · logGabor) e^(2πikx) dx, h = ∫ (F(k) · logGabor · H) e^(2πikx) dx, and F(k) is the result of the original image undergoing a Fourier transform.
5. A cell identification device, comprising:
an image acquisition unit, for acquiring an original image ;
a defocusing interference removal unit, for subjecting the original image to denoising processing to obtain a high- frequency edge image;
a segmentation unit, for subjecting the high-frequency edge image to Gaussian blur processing, then choosing a suitable threshold to perform binarization, marking out a cell area bounded by edges, and retrieving cell detail information covered by the marked area from the original image; and

a classification unit, for calculating multiple features for each cell, and classifying each cell on the basis of the multiple features.
6. The cell identification device as claimed in claim 5, wherein
the first filter is a logGabor filter, the second filter is a complex-valued monogenic filter, and the transfer function of the logGabor filter is
logGabor = exp( -(log(ω/ω0))² / (2 (log(β/ω0))²) )

wherein ω is frequency, ω0 is the center frequency of the logGabor filter, and β is a constant,
the transfer function of the complex-valued monogenic filter is

H = (j·u1 - u2) / √(u1² + u2²)

wherein u1 and u2 are the horizontal coordinate and vertical coordinate, respectively, in the frequency space, and ω = √(u1² + u2²).
7. The cell identification device as claimed in claim 5, wherein
the multiple features comprise at least one of the following features: circularity, rectangularity, gray-level co-occurrence matrix contrast property, gray-level co¬ occurrence matrix homogeneity property, gray-level co¬ occurrence matrix energy property, and mutual information between a target object image and an average template.
8. The cell identification device as claimed in claim 7, wherein
the mutual information between the target object image and the average template is obtained by matching of phase features, and the phase feature pf is obtained by the following formula:
pf = arctan( f_logGabor / √( (real(h))² + (imagin(h))² ) )

wherein f_logGabor = ∫ (F(k) · logGabor) e^(2πikx) dx, h = ∫ (F(k) · logGabor · H) e^(2πikx) dx, and F(k) is the result of the original image undergoing a Fourier transform.
9. The cell identification device as claimed in claim 5, wherein the defocusing interference removal unit comprises: a Fourier transform component, for transforming the original image to the frequency domain;
a first filter, for acquiring image high-frequency information;
a second filter, for acquiring image edge information; an inverse Fourier transform component, for inverse- transforming the filtered image to the time domain; and
an image energy extraction component, for extracting image energy, so as to obtain a high-frequency edge image.
10. A urine analyzer, comprising the cell identification device as claimed in any one of claims 5 - 9.
PCT/US2013/065879 2012-10-26 2013-10-21 Cell identification method and device, and urine analyzer WO2014066231A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201210418733.8A CN103793709A (en) 2012-10-26 2012-10-26 Cell recognition method and device, and urine analyzer
CN201210418733.8 2012-10-26

Publications (1)

Publication Number Publication Date
WO2014066231A1 true WO2014066231A1 (en) 2014-05-01

Family

ID=50545147

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2013/065879 WO2014066231A1 (en) 2012-10-26 2013-10-21 Cell identification method and device, and urine analyzer

Country Status (2)

Country Link
CN (1) CN103793709A (en)
WO (1) WO2014066231A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113379671A (en) * 2021-02-23 2021-09-10 华北电力大学 Partial discharge diagnosis system and diagnosis method for switch equipment
CN117593746A (en) * 2024-01-18 2024-02-23 武汉互创联合科技有限公司 Cell division balance evaluation system and device based on target detection
CN117593746B (en) * 2024-01-18 2024-04-19 武汉互创联合科技有限公司 Cell division balance evaluation system and device based on target detection

Families Citing this family (6)

Publication number Priority date Publication date Assignee Title
CN105760878A (en) * 2014-12-19 2016-07-13 西门子医疗保健诊断公司 Method and device for selecting urinary sediment microscope image with optimal focusing performance
CN107408197A (en) * 2015-03-11 2017-11-28 西门子公司 The system and method for the classification of cell image and video based on deconvolution network
CN110472472B (en) * 2019-05-30 2022-04-19 北京市遥感信息研究所 Airport detection method and device based on SAR remote sensing image
CN110415212A (en) * 2019-06-18 2019-11-05 平安科技(深圳)有限公司 Abnormal cell detection method, device and computer readable storage medium
CN112634338A (en) * 2020-12-30 2021-04-09 东北大学 Cerebrospinal fluid cell feature extraction method based on gray level co-occurrence matrix
CN115688028B (en) * 2023-01-05 2023-08-01 杭州华得森生物技术有限公司 Tumor cell growth state detection equipment

Citations (4)

Publication number Priority date Publication date Assignee Title
US5978498A (en) * 1994-09-20 1999-11-02 Neopath, Inc. Apparatus for automated identification of cell groupings on a biological specimen
US20050240106A1 (en) * 2000-11-13 2005-10-27 Oravecz Michael G Frequency domain processing of scanning acoustic imaging signals
WO2012055543A1 (en) * 2010-10-26 2012-05-03 Technische Universität München Use of a two-dimensional analytical signal in sonography
US20120213418A1 (en) * 2006-09-15 2012-08-23 Identix Incorporated Multimodal ocular biometric system and methods

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
CN102175625A (en) * 2010-11-29 2011-09-07 樊潮 Method for identifying cancer cells


Also Published As

Publication number Publication date
CN103793709A (en) 2014-05-14

Similar Documents

Publication Publication Date Title
CN108961208B (en) System and method for segmenting and counting aggregated leukocytes
WO2014066231A1 (en) Cell identification method and device, and urine analyzer
Hore et al. Finding contours of hippocampus brain cell using microscopic image analysis
Cosatto et al. Grading nuclear pleomorphism on histological micrographs
US8077958B2 (en) Computer-aided pathological diagnosis system
Nguyen et al. Prostate cancer detection: Fusion of cytological and textural features
JP2023030033A (en) Method for storing and reading out digital pathology analysis result
JP2013524361A (en) How to segment objects in an image
Wan et al. Wavelet-based statistical features for distinguishing mitotic and non-mitotic cells in breast cancer histopathology
US11959848B2 (en) Method of storing and retrieving digital pathology analysis results
Chen et al. Feasibility study on automated recognition of allergenic pollen: grass, birch and mugwort
Kowal et al. The feature selection problem in computer–assisted cytology
Akakin et al. Automated detection of cells from immunohistochemically-stained tissues: Application to Ki-67 nuclei staining
Akbar et al. Tumor localization in tissue microarrays using rotation invariant superpixel pyramids
Oprisescu et al. Automatic pap smear nuclei detection using mean-shift and region growing
Mani et al. Design of a novel shape signature by farthest point angle for object recognition
Metzler et al. Scale-independent shape analysis for quantitative cytology using mathematical morphology
Rege et al. Automatic leukemia identification system using otsu image segmentation and mser approach for microscopic smear image database
CN109697450B (en) Cell sorting method
Zhang et al. Cascaded-Automatic Segmentation for Schistosoma japonicum eggs in images of fecal samples
Rezaeilouyeh et al. Prostate cancer detection and gleason grading of histological images using shearlet transform
Yap et al. Automated image based prominent nucleoli detection
Safa'a et al. Histopathological prostate tissue glands segmentation for automated diagnosis
Guan et al. Nuclei enhancement and segmentation in color cervical smear images
Qian et al. Coarse-to-fine particle segmentation in microscopic urinary images

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13848938

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13848938

Country of ref document: EP

Kind code of ref document: A1