US20090132594A1 - Data classification by kernel density shape interpolation of clusters - Google Patents


Info

Publication number
US20090132594A1
Authority
US
United States
Prior art keywords
cluster
estimate value
clusters
density estimate
shape
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US12/142,949
Other versions
US7542953B1
Inventor
Tanveer Syeda-Mahmood
Peter J. Haas
John M. Lake
Guy M. Lohman
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SAP SE
Original Assignee
International Business Machines Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Business Machines Corp filed Critical International Business Machines Corp
Priority to US12/142,949
Priority to US12/164,532
Publication of US20090132594A1
Application granted
Publication of US7542953B1
Assigned to SAP AG. Assignment of assignors interest (see document for details). Assignors: INTERNATIONAL BUSINESS MACHINES CORPORATION
Assigned to SAP SE. Change of name (see document for details). Assignors: SAP AG
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G06N 20/10 Machine learning using kernel methods, e.g. support vector machines [SVM]
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G06F 18/232 Non-hierarchical techniques
    • G06F 18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F 18/24133 Distances to prototypes
    • G06F 18/24137 Distances to cluster centroïds
    • G06F 18/2414 Smoothing the distance, e.g. radial basis function networks [RBFN]
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects

Definitions

  • FIG. 1 is a graph illustrating an exemplary clustering of a dataset.
  • FIG. 2 is a flow diagram illustrating an exemplary embodiment of a shape interpolation process in accordance with the present invention.
  • FIGS. 3a-3c are graphs illustrating stages of an exemplary embodiment of a shape interpolation process performed in accordance with the present invention.
  • FIG. 4 is a block diagram illustrating an exemplary hardware configuration of a computer system within which exemplary embodiments of the present invention can be implemented.
  • Exemplary embodiments of the present invention described herein can be implemented to perform data classification using shape interpolation of clusters.
  • Shape interpolation is the process of transforming one object continuously into another. Modeling of cluster shapes has thus far been limited to representations either as a collection of isolated points sharing the same cluster label or through global parametric models such as mixtures of Gaussians. Cluster structure, however, cannot adequately be described as a collection of isolated points, and the parametric models typically operate to smooth the arbitrary distributions that characterize clusters by approximately fitting the distributions to a geometric shape having pre-determined boundaries, and therefore also cannot accurately represent the perceptible regions of the shape of a cluster. All parametric densities are unimodal, that is, they have a single local maximum, while many practical problems involve multimodal densities. Furthermore, traditional surface interpolation methods used in computer vision are not applicable to considerations of higher-dimensional point distributions.
  • Exemplary embodiments described herein can be implemented to interpolate cluster shapes in a manner that is able to preserve the overall perception of the shapes given by the data points in a multidimensional feature space.
  • the given sample points already present in the cluster are treated as anchor points and a probability density function, which is a function that represents a probability distribution in terms of integrals, is hypothesized from observed data.
  • exemplary embodiments can be implemented to represent cluster shapes using a model that is based on density estimation. Density estimation involves the construction of an estimate, based on observed data, of an unobservable underlying probability density function. The unobservable density function is viewed as the density according to which a large population is distributed, and the data are usually thought of as a random sample from that population.
  • kernel density estimation is a method of estimating the probability density function of a random variable.
  • Kernel density estimation is a nonparametric technique for density estimation in which a known density function, the kernel, is averaged across the observed data points to create a smooth approximation. Nonparametric procedures can be used with arbitrary distributions and without the assumption that the forms of the underlying densities are known. Although it is possible for less smooth density estimators such as the histogram density estimator to be made to be asymptotically consistent, other density estimators are often either discontinuous or converge at slower rates than the kernel density estimator.
  • the kernel density estimator can be thought of as placing small “bumps” at each observation determined by the kernel function.
  • the estimator consists of a “sum of bumps” and creates a smoother, finer approximation of the regions of cluster shapes that does not depend on end points or bounded, pre-determined shapes.
  • FIG. 2 illustrates a flow diagram of a process, indicated generally at 100 , for performing shape interpolation of clusters using a kernel density function in accordance with an exemplary embodiment of the present invention.
  • the initial clustering of a dataset is first performed at block 110 using any clustering method, including, for example, any suitable partitional (e.g., k-means, k-medoid, nearest neighbor), overlapping (e.g., fuzzy c-means), hierarchical (e.g., agglomerative, divisive), probabilistic (e.g., model-based methods such as mixtures of Gaussians), graph-theoretic (e.g., spectral clustering variants), and scale-space approaches.
  • in exemplary embodiments, two stages of clustering can be performed.
  • first, an unsupervised, non-parametric clustering method such as, for example, perceptual clustering can be performed on the initial dataset to determine the number of cluster shapes.
  • next, the data points in each separate cluster shape are clustered a second time using a supervised, partitional clustering method such as, for example, the k-means or k-medoid algorithms, to partition each cluster shape into a desired number of smaller cluster regions to provide a dense representation of the clusters.
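The second-stage partitioning just described can be sketched with a plain k-means. This is an illustrative sketch, not the patent's implementation; the function name, seed, and iteration cap are our own choices.

```python
import random

def kmeans(points, k, iters=100, seed=0):
    """Plain k-means: partition points (tuples) into k smaller regions,
    as in the second-stage clustering of one cluster shape."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        # Assignment step: each point joins its nearest center.
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            groups[i].append(p)
        # Update step: move each center to the mean of its group.
        new_centers = []
        for i, g in enumerate(groups):
            if g:
                new_centers.append(tuple(sum(p[d] for p in g) / len(g)
                                         for d in range(len(g[0]))))
            else:
                new_centers.append(centers[i])  # keep an empty center in place
        if new_centers == centers:
            break
        centers = new_centers
    return centers, groups
```

The number of regions k is the "desired number" supplied to the supervised stage; the unsupervised first stage would determine how many cluster shapes exist before this step runs.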
  • a smooth interpolation of the shapes of the clusters is obtained at block 120 by using a kernel density function that will be described in greater detail below.
  • the contribution of each data point can be smoothed out over a local neighborhood of that data point.
  • the contribution of data point X_i to the estimate at some point X depends on how far apart X_i and X are. The extent of this contribution depends upon the shape of the kernel function adopted and upon the bandwidth, which determines the range of the local estimation neighborhood for each data point.
  • the equation for determining the estimated density at any point x is provided by the standard kernel density estimator, f(x) = (1/(n h)) sum_{i=1..n} K((x - X_i)/h), where n is the number of observed data points, h is the bandwidth, and K is the kernel function.
  • the kernel function K can be chosen to be a smooth unimodal function such as a Gaussian kernel. It should be noted that choosing the Gaussian as the kernel function is different from fitting the distribution to a mixture-of-Gaussians model. In the present situation, the Gaussian is only used as a function that weights the data points. In exemplary embodiments, a multivariate Gaussian could be used. In the present exemplary embodiment, a simpler approximation in terms of a product of one-dimensional kernels is used. Thus, the shape of a cluster c consisting of sample points {X_1, X_2, . . . , X_n} at any arbitrary point X in the M-dimensional space is given by the approximation equation f_c(X) = (1/n) sum_{i=1..n} prod_{j=1..M} (1/h_j) K((X^j - X_i^j)/h_j), where X^j denotes the j-th coordinate of X and h_j is the bandwidth for dimension j.
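A minimal sketch of the product-of-one-dimensional-Gaussian-kernels approximation described above. The function names are ours, and the per-dimension bandwidths are passed in explicitly rather than estimated:

```python
import math

def gaussian(u):
    """Standard one-dimensional Gaussian kernel."""
    return math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)

def cluster_density(x, samples, bandwidths):
    """Cluster shape value at point x: the average over the cluster's
    sample points of a product of one-dimensional kernels, one per dimension."""
    total = 0.0
    for s in samples:
        prod = 1.0
        for xj, sj, hj in zip(x, s, bandwidths):
            prod *= gaussian((xj - sj) / hj) / hj
        total += prod
    return total / len(samples)
```

Because the Gaussian here only weights the data points, the estimate follows the data's arbitrary shape rather than forcing it into a pre-determined parametric form.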
  • any suitable choice of bandwidth that is not too small or too large for performing kernel density estimation can be used.
  • the bandwidth estimation formula that is used is one that is typically adopted for most practical applications and can be expressed by the following equation:
  • h_j = 1.06 * min( var(f_j), iqr(f_j)/1.34 ) * n^(-1/5),
  • where var(f_j) is the variance and iqr(f_j) is the inter-quartile range of feature f_j, and n is the number of samples in the cluster. This bandwidth may generally produce a less smooth but more accurate density estimate.
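The bandwidth rule above can be computed directly with the standard library. Two caveats: the text writes var(f_j) where Silverman's classic rule-of-thumb uses the standard deviation, and the sketch follows the text as written; the quartile method is our assumption, since the text does not specify one.

```python
import statistics

def iqr(values):
    """Inter-quartile range (method='inclusive' is an assumption;
    the text does not say how quartiles are computed)."""
    q1, _, q3 = statistics.quantiles(values, n=4, method="inclusive")
    return q3 - q1

def bandwidth(values):
    """h_j = 1.06 * min(var(f_j), iqr(f_j)/1.34) * n^(-1/5), as stated above."""
    n = len(values)
    return 1.06 * min(statistics.variance(values), iqr(values) / 1.34) * n ** (-1 / 5)
```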
  • the kernel density interpolation of the above approximation equation is applied by sampling the image on a grid at a specified image resolution for each selected clustering level.
  • the multidimensional image can be sampled with a fine grid having as much resolution as desired for the interpolation.
  • the image resolution could be specified as 256×256, 128×128, 64×64, etc. in exemplary embodiments.
  • the sampling resolution is selected as 256×256 so that a dense representation of shape will be obtained. This can eliminate small, noisy samples that are in single connected components, as the bandwidth will reduce to zero when applying the kernel density approximation equation for such samples.
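Sampling a fine grid over the data's bounding box at a chosen resolution might look like the following 2-D sketch; the resolution parameter corresponds to the 256×256, 128×128, etc. choices above, and the same idea extends coordinate-by-coordinate to higher dimensions.

```python
def sample_grid(points, resolution=256):
    """Uniform grid at the given resolution over the 2-D bounding box
    of the data points."""
    xmin = min(p[0] for p in points)
    ymin = min(p[1] for p in points)
    xmax = max(p[0] for p in points)
    ymax = max(p[1] for p in points)
    dx = (xmax - xmin) / (resolution - 1)
    dy = (ymax - ymin) / (resolution - 1)
    return [(xmin + i * dx, ymin + j * dy)
            for i in range(resolution) for j in range(resolution)]
```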
  • the kernel density interpolation performed at block 120 can be applied to interpolate the shape of each smaller cluster region.
  • a close fit estimation of the cluster shapes that resulted from the first clustering stage can then be obtained by uniting the interpolated shapes of the second-stage smaller cluster regions for each first-stage cluster shape.
  • classification can be performed based upon more accurate approximations of regions of cluster shapes, rather than simply based on proximity to a centroid or according to the boundary points of a pre-determined shape.
  • the kernel density estimate is evaluated from each cluster at each grid point using the above equation for determining the estimated density, and the maximum value of the estimate for each grid point is retained as an estimate along with the associated cluster label for the grid point.
  • if the maximum density estimate value for the grid point exceeds a specified threshold, the grid point is classified as belonging to the associated cluster and is therefore added to that cluster.
  • the new shape of the cluster is formed as the set of grid points added to that cluster at block 140 , along with the sample points of the cluster that were previously isolated at block 110 .
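The evaluate-take-maximum-threshold-and-add procedure just described can be sketched as follows. This is a 2-D toy with a single shared bandwidth for brevity; the process described above uses per-dimension bandwidths, and the threshold value here is illustrative.

```python
import math

def gaussian(u):
    return math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)

def density(point, samples, h):
    """Product-kernel density estimate with a common bandwidth h."""
    total = 0.0
    for s in samples:
        prod = 1.0
        for pj, sj in zip(point, s):
            prod *= gaussian((pj - sj) / h) / h
        total += prod
    return total / len(samples)

def interpolate_shapes(clusters, grid, h=0.5, threshold=1e-3):
    """Evaluate every cluster's density estimate at each grid point, keep the
    maximum, and add the grid point to the winning cluster when that maximum
    exceeds the threshold. The new shape of each cluster is its original
    samples plus the grid points it won."""
    shapes = {label: list(samples) for label, samples in clusters.items()}
    for g in grid:
        best_label, best_val = None, 0.0
        for label, samples in clusters.items():
            v = density(g, samples, h)
            if v > best_val:
                best_label, best_val = label, v
        if best_label is not None and best_val > threshold:
            shapes[best_label].append(g)
    return shapes
```

Grid points far from every cluster fall below the threshold and are left out, which is how the interpolated shapes stay dense around the data rather than filling the whole image.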
  • FIGS. 3 a - 3 c are graphs illustrating a shape interpolation performed in accordance with exemplary process 100 on an exemplary image of a set of data upon which clustering has been performed.
  • FIG. 3 a shows the original data.
  • FIG. 3 b illustrates the regions that were produced by interpolating the clusters of FIG. 3 a using kernel density estimation.
  • the interpolated shapes in FIG. 3b are representative of the overall cluster shapes in FIG. 3a.
  • FIG. 3c illustrates the final result of clustering after any needed noise removal and cluster merging is performed.
  • shape interpolation using kernel density estimation can be carried out dynamically during classification to find the nearest cluster.
  • a new sample can be assigned to the cluster with the highest kernel density estimate.
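Dynamic classification as just described reduces to an argmax over clusters of the kernel density estimate at the new sample. A sketch, with a common bandwidth h as an illustrative simplification:

```python
import math

def gaussian(u):
    return math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)

def kde(point, samples, h):
    """Product-kernel density estimate of one cluster at a point."""
    total = 0.0
    for s in samples:
        prod = 1.0
        for pj, sj in zip(point, s):
            prod *= gaussian((pj - sj) / h) / h
        total += prod
    return total / len(samples)

def classify(point, clusters, h=1.0):
    """Assign a new sample to the cluster with the highest kernel
    density estimate at that sample."""
    return max(clusters, key=lambda label: kde(point, clusters[label], h))
```

Unlike nearest-centroid classification, this assignment follows the cluster's shape: a point along an elongated cluster's direction scores higher than an equally distant point lateral to it.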
  • the exemplary shape interpolation processes described above can be implemented to classify new data points by testing membership in a shape interpolated from a cluster of data points using kernel density estimation. Kernel density estimation as described herein utilizes a nonparametric function to provide a good dense interpolation of shape around a cluster.
  • the details of the exemplary shape interpolation process illustrated in FIG. 2 can be summarized as follows:
  • the kernel function K can be chosen to be a smooth unimodal function.
  • noise and region merging inconsistencies can also be removed in exemplary embodiments.
  • exemplary embodiments of present invention can be implemented in software, firmware, hardware, or some combination thereof, and may be realized in a centralized fashion in one computer system, or in a distributed fashion where different elements are spread across several interconnected computer systems. Any kind of computer system—or other apparatus adapted for carrying out the methods and/or functions described herein—is suitable.
  • a typical combination of hardware and software could be a general purpose computer system with a computer program that, when being loaded and executed, controls the computer system such that it carries out the methods described herein.
  • Exemplary embodiments of the present invention can also be embedded in a computer program product, which comprises features enabling the implementation of the methods described herein, and which—when loaded in a computer system—is able to carry out these methods.
  • Computer program means or computer program in the present context include any expression, in any language, code or notation, of a set of instructions intended to cause a system having an information processing capability to perform a particular function either directly or after conversion to another language, code or notation, and/or reproduction in a different material form.
  • one or more aspects of exemplary embodiments of the present invention can be included in an article of manufacture (for example, one or more computer program products) having, for instance, computer usable media.
  • the media has embodied therein, for instance, computer readable program code means for providing and facilitating the capabilities of the present invention.
  • the article of manufacture can be included as a part of a computer system or sold separately.
  • at least one program storage device readable by a machine, tangibly embodying at least one program of instructions executable by the machine to perform the capabilities of the exemplary embodiments of the present invention described above can be provided.
  • FIG. 4 illustrates an exemplary computer system 10 upon which exemplary embodiments of the present invention can be implemented.
  • a processor or CPU 12 receives the data and instructions that it operates upon from on-board cache memory or further cache memory 18, possibly through the mediation of a cache controller 20, which can in turn receive such data from system read/write memory (“RAM”) 22 through a RAM controller 24, or from various peripheral devices through a system bus 26.
  • the data and instruction contents of RAM 22 will ordinarily have been loaded from peripheral devices such as a system disk 27 .
  • Alternative sources include communications interface 28 , which can receive instructions and data from other computer systems.
  • the above-described program or modules implementing exemplary embodiments of the present invention can work on processor 12 and the like to perform shape interpolation.
  • the program or modules implementing exemplary embodiments may be stored in an external storage medium.
  • an optical recording medium, such as a DVD or a PD
  • a magneto-optical recording medium, such as an MD
  • a tape medium
  • a semiconductor memory, such as an IC card, and the like
  • the program may be provided to computer system 10 through the network by using, as the recording medium, a storage device such as a hard disk or a RAM, which is provided in a server system connected to a dedicated communication network or the Internet.

Abstract

A data processing system is provided that comprises a processor, a random access memory for storing data and programs for execution by the processor, and computer readable instructions stored in the random access memory for execution by the processor to perform a method for obtaining a shape interpolated representation of shapes of clusters in an image of a clustered dataset. The method comprises generating a density estimate value of each grid point of a set of grid points sampled from the image at a specified resolution for each cluster using a kernel density function; evaluating the density estimate value of each grid point for each cluster to identify a maximum density estimate value of each grid point and a cluster associated with the maximum density estimate value; and adding each grid point for which the maximum density estimate value exceeds a specified threshold to the associated cluster to form a shape interpolated representation.

Description

    CROSS-REFERENCE TO RELATED APPLICATION
  • This application is a continuation of U.S. patent application Ser. No. 11/940,739, filed Nov. 15, 2007, the disclosure of which is incorporated by reference herein in its entirety.
  • BACKGROUND OF THE INVENTION
  • 1. Field of the Invention
  • Exemplary embodiments of the present invention relate to data classification, and more particularly, to shape interpolation of clustered data.
  • 2. Description of Background
  • Data mining involves sorting through large amounts of data and extracting relevant predictive information. Traditionally used by business intelligence organizations and financial analysts, data mining is increasingly being used in the sciences to extract information from the enormous datasets that are generated by modern experimental and observational methods. Data mining can be used to identify trends within data that go beyond simple analysis through the use of sophisticated algorithms.
  • Many data mining applications depend on the partitioning of data elements into related subsets. Therefore, classification and clustering are important tasks in data mining. Clustering is the unsupervised categorization of objects into different groups, or more precisely, the organizing of a collection of patterns (usually represented as a vector of measurements, or a point in a multidimensional space) into clusters based on similarity. A cluster is a collection of objects that are “similar” to one another and “dissimilar” to the objects belonging to other clusters. The goal of clustering is to determine an intrinsic grouping, or structure, in a set of unlabeled data. Clustering can be used to perform statistical data analysis in many fields, including machine learning, data mining, document retrieval, pattern recognition, medical imaging and other image analysis, and bioinformatics.
  • Classification is a statistical procedure in which individual items are placed into groups based on quantitative information on one or more traits inherent in the items and based on a training set of previously labeled (or pre-classified) patterns. As with clustering, a dataset is divided into groups based upon proximity such that the members of each group are as “close” as possible to one another, and different groups are as “far” as possible from one another, where distance is measured with respect to specific trait(s) that are being analyzed.
  • An important difference should be noted when comparing clustering and classification. In classification, a collection of labeled patterns is provided, and the problem is to label a newly encountered, yet unlabeled, pattern. Typically, the given training patterns are used to learn the descriptions of classes, which in turn are used to label a new pattern. In the case of clustering, the problem is to group a given collection of unlabeled patterns into meaningful clusters. In a sense, clusters can be seen as labeled patterns that are obtained solely from the data. Therefore, classification often succeeds clustering, although classification may also be performed without explicit clustering (for example, Support Vector Machine classification, described below). In situations in which classification is performed once the clusters have been identified, new data is typically classified by projecting the data into the multidimensional space of clusters and classifying the new data point based on proximity, that is, distance, to the nearest cluster centroid. The centroid of a cluster having a finite set of points can be computed as the arithmetic mean of each coordinate of the points.
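The centroid computation and nearest-centroid classification described above can be sketched as follows (function names are ours; squared Euclidean distance is used since the square root does not change which centroid is nearest):

```python
def centroid(points):
    """Arithmetic mean of each coordinate of a finite set of points."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def nearest_centroid(x, clusters):
    """Classify x by proximity to the nearest cluster centroid."""
    def sqdist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    cents = {label: centroid(pts) for label, pts in clusters.items()}
    return min(cents, key=lambda label: sqdist(x, cents[label]))
```

This is exactly the baseline that the invention argues is inadequate for elongated or irregular clusters, since it ignores cluster shape.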
  • The variety of techniques for representing data, measuring proximity between data elements, and grouping data elements has produced a rich assortment of classification and clustering methods.
  • In Support Vector Machine (SVM) classification, when classifying a new data point based on proximity, the distance is taken to the nearest data points coming from the clusters, called support vectors (even though there is no explicit representation of the cluster). Each new data point is represented by a p-dimensional input vector (a list of p numbers) that is mapped to a higher-dimensional space where a maximal separating hyperplane is constructed. Each of these data points belongs to only one of two classes. SVM aims to separate the classes with a (p-1)-dimensional hyperplane. Two parallel hyperplanes are constructed, one on each side of the hyperplane that separates the data. To achieve maximum separation between the two classes, a separating hyperplane is selected that maximizes the distance between the two parallel hyperplanes. That is, the nearest distance between a point in one separated hyperplane and a point in the other separated hyperplane is maximized.
  • In fuzzy clustering, data elements can belong to more than one cluster, and cluster membership is based on a proximity test to each cluster. Associated with each element is a set of membership levels that indicate the strength of the association between that data element and the particular clusters of which it is a member. The process of fuzzy clustering involves assigning these membership levels and then using them to assign data elements to one or more clusters. Thus, points on the edge of a cluster may be in the cluster to a lesser degree than points in the center of the cluster.
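Membership levels of the kind described can be computed as in fuzzy c-means, where a point's membership in each cluster decays with its distance to that cluster's center. The fuzzifier m and the handling of a point coinciding with a center follow the usual convention rather than anything stated in the text:

```python
def fuzzy_memberships(x, centers, m=2.0):
    """Membership level of point x in each cluster center, fuzzy c-means
    style: u_k proportional to (1/d_k)^(2/(m-1)), normalized to sum to 1.
    m is the fuzzifier; m=2 is the common choice."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
    weights = []
    for c in centers:
        d = dist(x, c)
        if d == 0.0:  # x coincides with a center: full membership there
            return [1.0 if c2 == c else 0.0 for c2 in centers]
        weights.append((1.0 / d) ** (2.0 / (m - 1.0)))
    total = sum(weights)
    return [w / total for w in weights]
```

A point on the edge of a cluster thus gets a membership level below 1 in that cluster, with the remainder spread over the other clusters.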
  • In categorical classification methods based on decision tree variants, the classification is based on the likelihood of the data point coming from any of the clusters based on the sharing of attribute values. Using a decision tree model, observations about an item are mapped to conclusions about its target cluster. In these tree structures, leaves represent classifications and branches represent conjunctions of features that lead to those classifications.
  • Classification using proximity to either centroids of clusters or support vectors is generally inadequate to properly classify data points. To provide for more accurate classification, the shape of the cluster should be taken into account. FIG. 1, illustrating an exemplary clustering of a dataset, demonstrates this problem. The points along the direction of the cluster indicated by W should be more likely to be classified as belonging to this cluster than the set of points indicated by X that are the same distance from the centroid as the points indicated by W. Points lateral to the cluster should be less likely to belong to the cluster than the points at the top edge, even when they have the same proximity to the centroid or support vectors of this cluster.
  • SUMMARY OF THE INVENTION
  • The shortcomings of the prior art can be overcome and additional advantages can be provided through exemplary embodiments of the present invention that are related to a data processing system that comprises a processor, a random access memory for storing data and programs for execution by the processor, and computer readable instructions stored in the random access memory for execution by the processor to perform a method for obtaining a shape interpolated representation of shapes of one or more clusters in an image of a dataset that has been clustered. The method comprises generating a density estimate value of each grid point of a set of grid points sampled from the image at a specified resolution for each cluster in the image using a kernel density function; evaluating the density estimate value of each grid point for each cluster to identify a maximum density estimate value of each grid point and a cluster associated with the maximum density estimate value of each grid point; and adding each grid point for which the maximum density estimate value exceeds a specified threshold to the cluster associated with the maximum density estimate value for the grid point to form a shape interpolated representation of the one or more clusters.
  • The shortcomings of the prior art can also be overcome, and additional advantages can also be provided, through exemplary embodiments of the present invention that are related to computer program products and methods corresponding to the above-summarized method, which are also described and claimed herein.
  • Additional features and advantages are realized through the techniques of the present invention. Other embodiments and aspects of the invention are described in detail herein and are considered a part of the claimed invention. For a better understanding of the invention with advantages and features, refer to the description and to the drawings.
  • TECHNICAL EFFECTS
  • As a result of the summarized invention, technically we have achieved a solution that can be implemented to interpolate cluster shapes by utilizing kernel density estimation to create a smoother approximation in a manner that is able to preserve the overall perception of the shapes given by the data points in a multidimensional feature space. Exemplary embodiments can be implemented to perform precise classification by more accurately identifying outlier data points.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The subject matter that is regarded as the invention is particularly pointed out and distinctly claimed in the claims at the conclusion of the specification. The foregoing and other objects, features, and advantages of the invention are apparent from the following detailed description of exemplary embodiments of the present invention taken in conjunction with the accompanying drawings in which:
  • FIG. 1 is a graph illustrating an exemplary clustering of a dataset.
  • FIG. 2 is a flow diagram illustrating an exemplary embodiment of a shape interpolation process in accordance with the present invention.
  • FIGS. 3 a-3 c are graphs illustrating stages of an exemplary embodiment of a shape interpolation process performed in accordance with the present invention.
  • FIG. 4 is a block diagram illustrating an exemplary hardware configuration of a computer system within which exemplary embodiments of the present invention can be implemented.
  • The detailed description explains exemplary embodiments of the present invention, together with advantages and features, by way of example with reference to the drawings. The flow diagrams depicted herein are just examples. There may be many variations to these diagrams or the steps (or operations) described therein without departing from the spirit of the invention. For instance, the steps may be performed in a differing order, or steps may be added, deleted, or modified. All of these variations are considered a part of the claimed invention.
  • DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS
  • While the specification concludes with claims defining the features of the invention that are regarded as novel, it is believed that the invention will be better understood from a consideration of the description of exemplary embodiments in conjunction with the drawings. It is of course to be understood that the embodiments described herein are merely exemplary of the invention, which can be embodied in various forms. Therefore, specific structural and functional details disclosed in relation to the exemplary embodiments described herein are not to be interpreted as limiting, but merely as a representative basis for teaching one skilled in the art to variously employ the present invention in virtually any appropriate form. Further, the terms and phrases used herein are not intended to be limiting but rather to provide an understandable description of the invention.
  • Exemplary embodiments of the present invention described herein can be implemented to perform data classification using shape interpolation of clusters. Shape interpolation is the process of transforming one object continuously into another. Modeling of cluster shapes has thus far been limited to representations either as a collection of isolated points sharing the same cluster label or through global parametric models such as mixtures of Gaussians. Cluster structure, however, cannot adequately be described as a collection of isolated points, and the parametric models typically operate to smooth the arbitrary distributions that characterize clusters by approximately fitting the distributions to a geometric shape having pre-determined boundaries, and therefore also cannot accurately represent the perceptible regions of the shape of a cluster. Such parametric densities are unimodal, that is, they have a single local maximum, while many practical problems involve multimodal densities. Furthermore, traditional surface interpolation methods used in computer vision are not applicable to higher-dimensional point distributions.
  • Exemplary embodiments described herein can be implemented to interpolate cluster shapes in a manner that is able to preserve the overall perception of the shapes given by the data points in a multidimensional feature space. In exemplary embodiments of the present invention, to generate a continuous manifold characterizing a cluster, the given sample points already present in the cluster are treated as anchor points and a probability density function, which is a function that represents a probability distribution in terms of integrals, is hypothesized from observed data. More specifically, exemplary embodiments can be implemented to represent cluster shapes using a model that is based on density estimation. Density estimation involves the construction of an estimate, based on observed data, of an unobservable underlying probability density function. The unobservable density function is viewed as the density according to which a large population is distributed, and the data are usually thought of as a random sample from that population.
  • Because of the sparseness of multidimensional datasets in comparison to feature space dimensions, it can be useful for exemplary embodiments to first obtain a clustering of the dataset that provides dense representation of the shapes of the clusters in which the clusters are viewed as regions of the pattern space in which the patterns are dense, separated by regions of low pattern density. Clusters can then be identified by searching for regions of high density, called modes, in the pattern space. The close fit provided by a dense representation of the cluster shapes would help in later classification of new data points, as the classification would be based on membership within multidimensional manifolds rather than distance alone.
  • Even more specifically, exemplary embodiments as described herein utilize kernel density estimation, which is a method of estimating the probability density function of a random variable. Kernel density estimation is a nonparametric technique for density estimation in which a known density function, the kernel, is averaged across the observed data points to create a smooth approximation. Nonparametric procedures can be used with arbitrary distributions and without the assumption that the forms of the underlying densities are known. Although it is possible for less smooth density estimators, such as the histogram density estimator, to be made asymptotically consistent, other density estimators are often either discontinuous or converge at slower rates than the kernel density estimator. Rather than grouping observations together in bins, the kernel density estimator can be thought of as placing small "bumps" at each observation, determined by the kernel function. As a result, the estimator consists of a "sum of bumps" and creates a smoother, finer approximation of the regions of cluster shapes that does not depend on end points or bounded, pre-determined shapes.
  • FIG. 2 illustrates a flow diagram of a process, indicated generally at 100, for performing shape interpolation of clusters using a kernel density function in accordance with an exemplary embodiment of the present invention. Because the kernel density interpolation will be applied for purposes of representing cluster shapes, the initial clustering of a dataset is first performed at block 110 using any clustering method, including, for example, any suitable partitional (e.g., k-means, k-medoid, nearest neighbor), overlapping (e.g., fuzzy c-means), hierarchical (e.g., agglomerative, divisive), probabilistic (e.g., model-based methods such as mixtures of Gaussians), graph-theoretic (e.g., spectral clustering variants), or scale-space approach.
  • In exemplary embodiments, to obtain a dense representation of the shapes of the clusters at block 110, two stages of clustering can be performed. In the first stage, an unsupervised, non-parametric clustering method, such as, for example, perceptual clustering, can be performed on the initial dataset to determine the number of cluster shapes. In the second stage, the data points in each separate cluster shape are clustered a second time using a supervised, partitional clustering method, such as, for example, the k-means or k-medoid algorithm, to partition each cluster shape into a desired number of smaller cluster regions, providing a dense representation of the clusters.
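As a sketch of the second, partitional stage, a plain k-means pass might look like the following (the data points are illustrative, and taking the first k points as initial centers is an assumption for the example; the patent does not prescribe an initialization):

```python
def kmeans(points, k, iters=20):
    """Plain k-means: alternate nearest-center assignment and centroid update."""
    centers = list(points[:k])  # deterministic initialization for the example
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            groups[i].append(p)
        # Recompute each centroid; keep the old center if a group emptied out.
        centers = [tuple(sum(col) / len(g) for col in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centers, groups = kmeans(pts, k=2)
```

On this toy data the two tight groups of three points are recovered as the two cluster regions.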
  • After clustering is performed in exemplary process 100, a smooth interpolation of the shapes of the clusters is obtained at block 120 by using a kernel density function that will be described in greater detail below. First, however, some terminology for the model used in the present exemplary embodiment will be outlined.
  • In the model of the present exemplary embodiment, given n sample points {X1, X2, ..., Xn} belonging to a cluster c, the contribution of each data point can be smoothed out over a local neighborhood of that data point. The contribution of data point Xi to the estimate at some point X depends on how far apart Xi and X are. The extent of this contribution depends upon the shape of the kernel function adopted and on the bandwidth, which determines the range of the local estimation neighborhood for each data point. In the present exemplary embodiment, denoting the kernel function by K and its bandwidth by h, the estimated density at any point X is given by
  • \hat{P}(X) = \frac{1}{nh} \sum_{i=1}^{n} K\!\left(\frac{X - X_i}{h}\right),
  • where ∫K(t)dt = 1, which ensures that the estimate \hat{P}(X) itself integrates to 1.
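A direct transcription of this estimator in code, with a standard Gaussian density (which integrates to 1) standing in for K; a sketch under those assumptions, not the claimed implementation:

```python
import math

def gaussian_kernel(t):
    """Standard Gaussian density; satisfies the requirement that K integrate to 1."""
    return math.exp(-0.5 * t * t) / math.sqrt(2.0 * math.pi)

def kde(x, samples, h):
    """Kernel density estimate at x: average of kernels centered on the samples."""
    return sum(gaussian_kernel((x - xi) / h) for xi in samples) / (len(samples) * h)

samples = [0.0, 1.0, 2.0]
```

The estimate is highest near the samples and decays smoothly away from them, and the 1/(nh) normalization makes the whole estimate integrate to 1 over the line.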
  • In exemplary embodiments, the kernel function K can be chosen to be a smooth unimodal function such as a Gaussian kernel. It should be noted that choosing the Gaussian as the kernel function is different from fitting the distribution to a mixture of Gaussian model. In the present situation, the Gaussian is only used as a function that weights the data points. In exemplary embodiments, a multivariate Gaussian could be used. In the present exemplary embodiment, a simpler approximation in terms of a product of one-dimensional kernels is used. Thus, the shape of a cluster c consisting of sample points {X1, X2, . . . Xn} at any arbitrary point X in the M-dimensional space is given by the approximation equation
  • \hat{P}(X) = \frac{1}{n} \sum_{i=1}^{n} \prod_{j=1}^{M} \frac{1}{\sqrt{2\pi}\, h_j} \exp\!\left(-\frac{(f_{ji} - \bar{f}_{ji})^2}{2 h_j^2}\right),
  • where (f_{1i}, f_{2i}, ..., f_{Mi}) are the values along the feature dimensions and (\bar{f}_{1i}, \bar{f}_{2i}, ..., \bar{f}_{Mi}) are the sample means along the respective dimensions.
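In code, the product-of-one-dimensional-Gaussian-kernels approximation might look like the following, here written in the common form that centers each kernel on the sample's own feature values (the cluster points and per-dimension bandwidths are illustrative assumptions):

```python
import math

def product_kernel_density(x, samples, h):
    """Average over the cluster's samples of a product of one-dimensional
    Gaussian kernels, one per feature dimension, with bandwidths h[j]."""
    total = 0.0
    for s in samples:
        prod = 1.0
        for xj, sj, hj in zip(x, s, h):
            prod *= (math.exp(-(xj - sj) ** 2 / (2.0 * hj * hj))
                     / (math.sqrt(2.0 * math.pi) * hj))
        total += prod
    return total / len(samples)

cluster = [(0.0, 0.0), (0.5, 0.2), (0.2, 0.6)]
inside = product_kernel_density((0.2, 0.3), cluster, h=[0.5, 0.5])
outside = product_kernel_density((5.0, 5.0), cluster, h=[0.5, 0.5])
```

A point amid the cluster's samples receives a much larger estimate than one far away, which is exactly what the shape interpolation relies on.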
  • In exemplary embodiments, any suitable choice of bandwidth that is not too small or too large for performing kernel density estimation can be used. In the present exemplary embodiment, the bandwidth estimation formula that is used is one that is typically adopted for most practical applications and can be expressed by the following equation:
  • h_j = 1.06\, \min\!\left(\sqrt{\operatorname{var}(f_j)},\ \frac{\operatorname{iqr}(f_j)}{1.34}\right) n^{-1/5},
  • where f_j = (f_{j1}, f_{j2}, ..., f_{jn}) are the features assembled from dimension j for all samples in the cluster. Here, iqr(f_j) is the inter-quartile range of f_j and n is the number of samples in the cluster. This bandwidth generally produces a less smooth but more accurate density estimate.
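This rule of thumb (essentially Silverman's) can be computed per feature dimension with the standard library, reading the minimum as a comparison between the standard deviation and the scaled inter-quartile range:

```python
import statistics

def bandwidth(fj):
    """h_j = 1.06 * min(std(f_j), iqr(f_j)/1.34) * n^(-1/5)."""
    n = len(fj)
    std = statistics.pstdev(fj)
    q1, _, q3 = statistics.quantiles(fj, n=4)  # quartile cut points
    return 1.06 * min(std, (q3 - q1) / 1.34) * n ** (-0.2)

h5 = bandwidth([1.0, 2.0, 3.0, 4.0, 5.0])
h20 = bandwidth([1.0, 2.0, 3.0, 4.0, 5.0] * 4)
```

With more samples of the same spread the bandwidth shrinks (the n^(-1/5) factor), giving the less smooth but more accurate estimate noted above.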
  • At block 120 of exemplary process 100, the kernel density interpolation of the above approximation equation is applied by sampling the image on a grid at a specified resolution for each selected clustering level. To interpolate the shape of clusters, the multidimensional image can be sampled with a fine grid having as much resolution as desired for the interpolation. For example, the image resolution could be specified as 256×256, 128×128, 64×64, etc. in exemplary embodiments. In the present exemplary embodiment, the sampling resolution is selected as 256×256 so that a dense representation of shape will be obtained. This can also eliminate small, noisy samples that form single connected components, as the bandwidth reduces to zero when the kernel density approximation equation is applied to such samples.
  • In exemplary embodiments in which a two-stage clustering is performed at block 110 to generate a number of cluster shapes and a desired number of smaller cluster regions for each cluster shape, the kernel density interpolation performed at block 120 can be applied to interpolate the shape of each smaller cluster region. A close-fit estimation of the cluster shapes that resulted from the first clustering stage can then be obtained by uniting the interpolated shapes of the second-stage smaller cluster regions for each first-stage cluster shape. As a result, classification can be performed based upon more accurate approximations of regions of cluster shapes, rather than simply based on proximity to a centroid or according to the boundary points of a pre-determined shape.
  • At block 130, after performing the kernel density interpolation, the kernel density estimate is evaluated from each cluster at each grid point using the above equation for determining the estimated density, and the maximum value of the estimate for each grid point is retained as an estimate along with the associated cluster label for the grid point. At block 140, for each grid point, if the maximum value of the density estimate for that grid point is above a chosen threshold, the grid point is classified as belonging to the associated cluster and therefore added to that cluster. At block 150, for each cluster, the new shape of the cluster is formed as the set of grid points added to that cluster at block 140, along with the sample points of the cluster that were previously isolated at block 110.
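The grid loop of blocks 130-150 can be sketched as follows, using a 2-D grid over the unit square, a Gaussian product kernel with a single bandwidth h, and illustrative clusters, resolution, and threshold (all assumptions for the example, not the claimed parameters):

```python
import math

def density(pt, samples, h):
    """Product-of-Gaussians kernel density estimate of `pt` under one cluster."""
    out = 0.0
    for s in samples:
        out += math.exp(sum(-(p - q) ** 2 / (2.0 * h * h) for p, q in zip(pt, s)))
    return out / (len(samples) * (math.sqrt(2.0 * math.pi) * h) ** len(pt))

def interpolate_shapes(clusters, resolution, threshold, h):
    """Block 130: evaluate every cluster's estimate at each grid point and keep
    the maximum with its cluster label.  Block 140: add the grid point to that
    cluster when the maximum exceeds the threshold.  Block 150: the new shape is
    the added grid points together with the cluster's original samples."""
    shapes = {label: list(pts) for label, pts in clusters.items()}
    for gx in range(resolution):
        for gy in range(resolution):
            pt = (gx / (resolution - 1), gy / (resolution - 1))
            label, best = max(((lb, density(pt, pts, h))
                               for lb, pts in clusters.items()),
                              key=lambda t: t[1])
            if best > threshold:
                shapes[label].append(pt)
    return shapes

clusters = {"A": [(0.1, 0.1), (0.2, 0.2)], "B": [(0.8, 0.8), (0.9, 0.9)]}
shapes = interpolate_shapes(clusters, resolution=16, threshold=0.3, h=0.1)
```

Grid points near each cluster are absorbed into it, producing the dense "halo" regions described below, while grid points far from both clusters fall under the threshold and remain unclassified.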
  • As a result of the exemplary shape interpolation process described above, a dense representation of clusters can be obtained. The resulting shape of each cluster will resemble the original cluster shape and therefore can be more indicative of a classification region around the cluster than the use of support vectors alone. FIGS. 3 a-3 c are graphs illustrating a shape interpolation performed in accordance with exemplary process 100 on an exemplary image of a set of data upon which clustering has been performed. FIG. 3 a shows the original data. FIG. 3 b illustrates the regions that were produced by interpolating the clusters of FIG. 3 a using kernel density estimation. As can be seen, the interpolated shapes in FIG. 3 b are representative of the overall cluster shapes in FIG. 3 a and define "halo" regions around the clusters. The data points that fall within these regions would be classified as belonging to the respective clusters. The perceptible shapes of the clusters are preserved in the interpolation. As a result, the spatial adjacency of the regions indicated by arrow Y in FIG. 3 b, as well as the spatial disjointedness of the regions indicated by arrow Z, can both be easily spotted. In exemplary embodiments, the former pairs of regions can be merged and the latter pairs of regions can be disconnected, and single-sample clusters that were formed due to noise and have no kernel density interpolation to form a region can be eliminated. FIG. 3 c illustrates the final result of clustering after any needed noise removal and cluster merging is performed.
  • Although the exemplary embodiments described thus far have involved performing an explicit computation, in other exemplary embodiments, shape interpolation using kernel density estimation can be carried out dynamically during classification to find the nearest cluster. As a result, instead of using the centroid of the cluster as a prototypical member for computing the nearest distance, a new sample can be assigned to the cluster with the highest kernel density estimate.
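Under that dynamic scheme, classification reduces to evaluating each cluster's kernel density estimate at the new sample and taking the argmax; the clusters and bandwidth below are hypothetical, and the Gaussian product kernel is one possible choice:

```python
import math

def cluster_density(x, samples, h):
    """Gaussian kernel density estimate of point x under one cluster's samples."""
    norm = (math.sqrt(2.0 * math.pi) * h) ** len(x)
    return sum(math.exp(-sum((a - b) ** 2 for a, b in zip(x, s)) / (2.0 * h * h))
               for s in samples) / (len(samples) * norm)

def classify(x, clusters, h=0.5):
    """Assign x to the cluster whose kernel density estimate at x is highest."""
    return max(clusters, key=lambda label: cluster_density(x, clusters[label], h))

clusters = {"left": [(0.0, 0.0), (1.0, 0.0)], "right": [(5.0, 0.0), (6.0, 0.0)]}
```

No centroid is ever computed: the new sample is compared against each cluster's full density, so the cluster's shape, not just its center, decides the assignment.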
  • The exemplary shape interpolation processes described above can be implemented to classify new data points by testing membership in a shape interpolated from a cluster of data points using kernel density estimation. Kernel density estimation as described herein utilizes a nonparametric function to provide a good dense interpolation of shape around a cluster. The details of the exemplary shape interpolation process illustrated in FIG. 2 can be summarized as follows:
  • 1. Perform clustering of the data points using any clustering algorithm.
  • 2. Let there be n sample points {X1, X2, . . . Xn} belonging to a cluster c.
  • 3. Perform a dense shape interpolation using a kernel density function. That is, at a point X in the multidimensional space surrounding c, the contribution of data point Xi to the estimate at X depends on how far apart Xi and X are. The extent of this contribution depends upon the shape of the kernel function adopted and on the bandwidth in exemplary embodiments. Denoting the kernel function by K and its bandwidth by h, the estimated density at any point X is
  • \hat{P}(X) = \frac{1}{nh} \sum_{i=1}^{n} K\!\left(\frac{X - X_i}{h}\right),
  • where ∫K(t)dt = 1, which ensures that the estimate \hat{P}(X) itself integrates to 1. In exemplary embodiments, the kernel function K can be chosen to be a smooth unimodal function.
  • 4. Given any new point X, the class to which X belongs is the one for which the value of the approximation equation
  • \hat{P}(X) = \frac{1}{nh} \sum_{i=1}^{n} K\!\left(\frac{X - X_i}{h}\right)
  • is the maximum.
  • By approximating the shape of clusters at a chosen level through a dense kernel density function-based interpolation of sparse datasets, noise and region merging inconsistencies can also be removed in exemplary embodiments.
  • The capabilities of exemplary embodiments of the present invention described above can be implemented in software, firmware, hardware, or some combination thereof, and may be realized in a centralized fashion in one computer system, or in a distributed fashion where different elements are spread across several interconnected computer systems. Any kind of computer system, or other apparatus adapted for carrying out the methods and/or functions described herein, is suitable. A typical combination of hardware and software could be a general purpose computer system with a computer program that, when being loaded and executed, controls the computer system such that it carries out the methods described herein. Exemplary embodiments of the present invention can also be embedded in a computer program product, which comprises features enabling the implementation of the methods described herein, and which, when loaded in a computer system, is able to carry out these methods.
  • Computer program means or computer program in the present context include any expression, in any language, code or notation, of a set of instructions intended to cause a system having an information processing capability to perform a particular function either directly or after conversion to another language, code or notation, and/or reproduction in a different material form.
  • Therefore, one or more aspects of exemplary embodiments of the present invention can be included in an article of manufacture (for example, one or more computer program products) having, for instance, computer usable media. The media has embodied therein, for instance, computer readable program code means for providing and facilitating the capabilities of the present invention. The article of manufacture can be included as a part of a computer system or sold separately. Furthermore, at least one program storage device readable by a machine, tangibly embodying at least one program of instructions executable by the machine to perform the capabilities of the exemplary embodiments of the present invention described above can be provided.
  • For instance, exemplary embodiments of the present invention can be implemented within the exemplary embodiment of a hardware configuration provided for a computer system in FIG. 4. FIG. 4 illustrates an exemplary computer system 10 upon which exemplary embodiments of the present invention can be implemented. A processor or CPU 12 receives the data and instructions it operates upon from on-board cache memory or further cache memory 18, possibly through the mediation of a cache controller 20, which in turn can receive such data from system read/write memory ("RAM") 22 through a RAM controller 24, or from various peripheral devices through a system bus 26. The data and instruction contents of RAM 22 will ordinarily have been loaded from peripheral devices such as a system disk 27. Alternative sources include communications interface 28, which can receive instructions and data from other computer systems.
  • The above-described program or modules implementing exemplary embodiments of the present invention can work on processor 12 and the like to perform shape interpolation. The program or modules implementing exemplary embodiments may be stored in an external storage medium. In addition to system disk 27, an optical recording medium such as a DVD and a PD, a magneto-optical recording medium such as a MD, a tape medium, a semiconductor memory such as an IC card, and the like may be used as the storage medium. Moreover, the program may be provided to computer system 10 through the network by using, as the recording medium, a storage device such as a hard disk or a RAM, which is provided in a server system connected to a dedicated communication network or the Internet.
  • While exemplary embodiments of the present invention have been described, it will be understood that those skilled in the art, both now and in the future, may make various modifications without departing from the spirit and the scope of the present invention as set forth in the following claims. These following claims should be construed to maintain the proper protection for the present invention.

Claims (5)

1. A data processing system comprising:
a processor;
a random access memory for storing data and programs for execution by the processor; and
computer readable instructions stored in the random access memory for execution by the processor to perform a method for obtaining a shape interpolated representation of shapes of one or more clusters in an image of a dataset that has been clustered, the method comprising:
generating a density estimate value of each grid point of a set of grid points sampled from the image at a specified resolution for each cluster in the image using a kernel density function;
evaluating the density estimate value of each grid point for each cluster to identify a maximum density estimate value of each grid point and a cluster associated with the maximum density estimate value of each grid point; and
adding each grid point for which the maximum density estimate value exceeds a specified threshold to the cluster associated with the maximum density estimate value for the grid point to form a shape interpolated representation of the one or more clusters.
2. The data processing system of claim 1, wherein the dataset has been clustered using a two-stage clustering method, the two-stage clustering method comprising:
clustering the dataset using an unsupervised, non-parametric clustering method to generate a set of cluster shapes each comprising a set of data points of the dataset; and
clustering the data points of each cluster shape of the set of cluster shapes using a supervised, partitional clustering method to partition each cluster shape into a specified number of cluster regions.
3. The data processing system of claim 1, wherein the kernel density function is a Gaussian kernel.
4. The data processing system of claim 1, wherein the method for obtaining a shape interpolated representation of shapes of one or more clusters in an image of a dataset that has been clustered further comprises merging any spatially adjacent clusters in the shape interpolated representation and removing any spatially disjointed clusters in the shape interpolated representation.
5. The data processing system of claim 1, wherein the method for obtaining a shape interpolated representation of shapes of one or more clusters in an image of a dataset that has been clustered further comprises classifying a new data point by generating a density estimate value of the new data point for each cluster in the image using the kernel density function, evaluating the density estimate value of the new data point for each cluster to identify a maximum density estimate value of the new data point and a cluster associated with the maximum density estimate value, and adding the new data point to the cluster associated with the maximum density estimate value in the shape interpolated representation if the maximum density estimate value exceeds a specified threshold to classify the new data point.
US12/142,949 2007-11-15 2008-06-20 Data classification by kernel density shape interpolation of clusters Active US7542953B1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US12/142,949 US7542953B1 (en) 2007-11-15 2008-06-20 Data classification by kernel density shape interpolation of clusters
US12/164,532 US7542954B1 (en) 2007-11-15 2008-06-30 Data classification by kernel density shape interpolation of clusters

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US11/940,739 US7412429B1 (en) 2007-11-15 2007-11-15 Method for data classification by kernel density shape interpolation of clusters
US12/142,949 US7542953B1 (en) 2007-11-15 2008-06-20 Data classification by kernel density shape interpolation of clusters


Publications (2)

Publication Number Publication Date
US20090132594A1 true US20090132594A1 (en) 2009-05-21
US7542953B1 US7542953B1 (en) 2009-06-02



Families Citing this family (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7684963B2 (en) * 2005-03-29 2010-03-23 International Business Machines Corporation Systems and methods of data traffic generation via density estimation using SVD
US7623712B2 (en) * 2005-06-09 2009-11-24 Canon Kabushiki Kaisha Image processing method and apparatus
US8280484B2 (en) 2007-12-18 2012-10-02 The Invention Science Fund I, Llc System, devices, and methods for detecting occlusions in a biological subject
US9672471B2 (en) 2007-12-18 2017-06-06 Gearbox Llc Systems, devices, and methods for detecting occlusions in a biological subject including spectral learning
US8296248B2 (en) * 2009-06-30 2012-10-23 Mitsubishi Electric Research Laboratories, Inc. Method for clustering samples with weakly supervised kernel mean shift matrices
US9418145B2 (en) * 2013-02-04 2016-08-16 TextWise Company, LLC Method and system for visualizing documents
DE102013213397A1 (en) * 2013-07-09 2015-01-15 Robert Bosch Gmbh Method and apparatus for providing support point data for a data-based function model
US9842390B2 (en) * 2015-02-06 2017-12-12 International Business Machines Corporation Automatic ground truth generation for medical image collections
CN104715460A (en) * 2015-03-30 2015-06-17 江南大学 Quick image super-resolution reconstruction method based on sparse representation
US10546246B2 (en) 2015-09-18 2020-01-28 International Business Machines Corporation Enhanced kernel representation for processing multimodal data
JP6740611B2 (en) * 2015-12-24 2020-08-19 カシオ計算機株式会社 Information processing apparatus, information processing method, and program
CN106484758B (en) * 2016-08-09 2019-08-06 浙江经济职业技术学院 Real-time data stream kernel density estimation method based on grid and cluster optimization
US10417530B2 (en) * 2016-09-30 2019-09-17 Cylance Inc. Centroid for improving machine learning classification and info retrieval
US10026014B2 (en) 2016-10-26 2018-07-17 Nxp Usa, Inc. Method and apparatus for data set classification based on generator features
EP4148593A1 (en) * 2017-02-27 2023-03-15 QlikTech International AB Methods and systems for extracting and visualizing patterns in large-scale data sets
US10839256B2 (en) * 2017-04-25 2020-11-17 The Johns Hopkins University Method and apparatus for clustering, analysis and classification of high dimensional data sets
US10229347B2 (en) * 2017-05-14 2019-03-12 International Business Machines Corporation Systems and methods for identifying a target object in an image
CN108828583B (en) * 2018-06-15 2022-06-28 西安电子科技大学 Plot clustering method based on fuzzy C-means
US11151483B2 (en) * 2019-05-01 2021-10-19 Cognizant Technology Solutions India Pvt. Ltd System and a method for assessing data for analytics
CN110674120B (en) * 2019-08-09 2024-01-19 国电新能源技术研究院有限公司 Wind farm data cleaning method and device
US11328002B2 (en) * 2020-04-17 2022-05-10 Adobe Inc. Dynamic clustering of sparse data utilizing hash partitions
US11403325B2 (en) * 2020-05-12 2022-08-02 International Business Machines Corporation Clustering items around predefined anchors
CN111626821B (en) * 2020-05-26 2024-03-12 山东大学 Product recommendation method and system using integrated feature selection for customer classification
CN112163623B (en) * 2020-09-30 2022-03-04 广东工业大学 Fast clustering method based on density subgraph estimation, computer equipment and storage medium
CN112233019B (en) * 2020-10-14 2023-06-30 长沙行深智能科技有限公司 ISP color interpolation method and device based on adaptive Gaussian kernel
CN112288704B (en) * 2020-10-26 2021-09-28 中国人民解放军陆军军医大学第一附属医院 Visualization method for quantifying glioma invasiveness based on kernel density function
WO2022098729A1 (en) * 2020-11-04 2022-05-12 Second Sight Data Discovery, Llc Technologies for unsupervised data classification with topological methods
KR102458372B1 (en) * 2020-11-30 2022-10-26 한국전자기술연구원 Big-data-based system and method for predicting user occupancy time in urban buildings for energy-saving automatic heating and cooling control
US11734242B1 (en) * 2021-03-31 2023-08-22 Amazon Technologies, Inc. Architecture for resolution of inconsistent item identifiers in a global catalog
CN113899971B (en) * 2021-09-30 2023-11-14 广东电网有限责任公司广州供电局 Transformer abnormal condition discrimination method based on density similarity sparse clustering

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5671294A (en) * 1994-09-15 1997-09-23 The United States Of America As Represented By The Secretary Of The Navy System and method for incorporating segmentation boundaries into the calculation of fractal dimension features for texture discrimination
US20030147558A1 (en) * 2002-02-07 2003-08-07 Loui Alexander C. Method for image region classification using unsupervised and supervised learning
US20060217925A1 (en) * 2005-03-23 2006-09-28 Taron Maxime G Methods for entity identification
US20070003137A1 (en) * 2005-04-19 2007-01-04 Daniel Cremers Efficient kernel density estimation of shape and intensity priors for level set segmentation

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7202791B2 (en) 2001-09-27 2007-04-10 Koninklijke Philips N.V. Method and apparatus for modeling behavior using a probability distribution function

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090232355A1 (en) * 2008-03-12 2009-09-17 Harris Corporation Registration of 3d point cloud data using eigenanalysis
US20110129159A1 (en) * 2009-11-30 2011-06-02 Xerox Corporation Content based image selection for automatic photo album generation
US20110200249A1 (en) * 2010-02-17 2011-08-18 Harris Corporation Surface detection in images based on spatial data
US20140201339A1 (en) * 2011-05-27 2014-07-17 Telefonaktiebolaget L M Ericsson (Publ) Method of conditioning communication network data relating to a distribution of network entities across a space
CN103729539A (en) * 2012-10-12 2014-04-16 国际商业机器公司 Method and system for detecting and describing visible features on visualization
US20140104309A1 (en) * 2012-10-12 2014-04-17 International Business Machines Corporation Detecting and Describing Visible Features on a Visualization
US9311900B2 (en) * 2012-10-12 2016-04-12 International Business Machines Corporation Detecting and describing visible features on a visualization
US9311899B2 (en) * 2012-10-12 2016-04-12 International Business Machines Corporation Detecting and describing visible features on a visualization
US10223818B2 (en) 2012-10-12 2019-03-05 International Business Machines Corporation Detecting and describing visible features on a visualization
CN106339416A (en) * 2016-08-15 2017-01-18 常熟理工学院 Grid-based data clustering method for fast searching of density peaks
US11210348B2 (en) * 2018-01-13 2021-12-28 Huizhou University Data clustering method and apparatus based on k-nearest neighbor and computer readable storage medium
WO2020113363A1 (en) * 2018-12-03 2020-06-11 Siemens Mobility GmbH Method and apparatus for classifying data

Also Published As

Publication number Publication date
US7542953B1 (en) 2009-06-02
US20090132568A1 (en) 2009-05-21
US7542954B1 (en) 2009-06-02
US7412429B1 (en) 2008-08-12

Similar Documents

Publication Publication Date Title
US7542953B1 (en) Data classification by kernel density shape interpolation of clusters
Jia et al. Bagging-based spectral clustering ensemble selection
US20160275415A1 (en) Reader learning method and device, data recognition method and device
Bicego et al. Properties of the Box–Cox transformation for pattern classification
US20150324663A1 (en) Image congealing via efficient feature selection
Celeux et al. Variable selection in model-based clustering and discriminant analysis with a regularization approach
Reza et al. ICA and PCA integrated feature extraction for classification
Kini et al. Large margin mixture of AR models for time series classification
US7974476B2 (en) Flexible MQDF classifier model compression
Masuyama et al. A kernel Bayesian adaptive resonance theory with a topological structure
Masoumi et al. Shape classification using spectral graph wavelets
CN115331752B (en) Method for adaptively predicting quartz-forming environment
Alalyan et al. Model-based hierarchical clustering for categorical data
Kosiorowski Dilemmas of robust analysis of economic data streams
Balafar et al. Active learning for constrained document clustering with uncertainty region
Channoufi et al. Spatially constrained mixture model with feature selection for image and video segmentation
US20160217386A1 (en) Computer implemented classification system and method
Yellamraju et al. Benchmarks for image classification and other high-dimensional pattern recognition problems
Mishra et al. Performance analysis of dimensionality reduction techniques: a comprehensive review
Singh et al. Feature selection using rough set for improving the performance of the supervised learner
Li et al. Strangeness based feature selection for part based recognition
CN111401783A (en) Power system operation data integration feature selection method
Krömer et al. Cluster analysis of data with reduced dimensionality: an empirical study
Kim et al. Rank-based discriminative feature learning for motor imagery classification in EEG signals
Soviany et al. Feature Engineering

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: SAP AG, GERMANY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:INTERNATIONAL BUSINESS MACHINES CORPORATION;REEL/FRAME:028540/0522

Effective date: 20120629

FPAY Fee payment

Year of fee payment: 4

AS Assignment

Owner name: SAP SE, GERMANY

Free format text: CHANGE OF NAME;ASSIGNOR:SAP AG;REEL/FRAME:033625/0334

Effective date: 20140707

FPAY Fee payment

Year of fee payment: 8

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 12