US20060204097A1 - Method and system for implementing N-dimensional object recognition using dynamic adaptive recognition layers

Info

Publication number
US20060204097A1
US20060204097A1
Authority
US
United States
Legal status
Abandoned
Application number
US11/072,880
Inventor
Klaus Bach
Current Assignee
Individual
Original Assignee
Individual
Application filed by Individual
Priority to US11/072,880
Priority to PCT/US2006/007502
Priority to EP06736767.2A
Publication of US20060204097A1
Priority to US11/879,001 (US8059890B2)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/449: Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V10/451: Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V10/454: Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]


Abstract

In a method and a system for the implementation of multi-layered network object recognition in N-dimensional space, the structure of a neural recognition network is dynamically generated and adapted to recognize an object. The layers of the network are capable of recognizing key features of the input data by using evaluation rules to establish a hierarchical structure that can adapt to data position and orientation, varying data densities, geometrical scaling, and faulty or missing data.

Description

    BACKGROUND OF THE INVENTION
  • The invention relates to a method and system for implementing successive multi-layered feature recognition in N-dimensional space in which recognition cells are dynamically generated to accommodate the input data and are adapted during the recognition process, wherein the recognition cells are structured into groups which have specific recognition features assigned to them.
  • Pattern and object recognition by means of successive computation steps generally begins with the loading of an input dataset into a pre-defined set of input variables or cells which constitute the lowest recognition layer. During the recognition process, each cell in higher recognition layers generates a response based on the values or responses of a selected subset of cells in lower layers (receptive field). The number of layers used, the sizes of the receptive fields, and the rule used by each cell to compute its response vary depending on the type of information to be recognized, that is, the complexity and number of the patterns, and the intermediate features that must be recognized to successfully identify the pattern. Sufficiently fine-grained intermediate features, overlapping receptive fields, and strongly converging data paths enable distortion-invariant and position-tolerant recognition.
  • The structure and dimension of the recognition layers are generally fixed during the recognition process, requiring that each layer contain enough recognition cells to fill the N dimensions of the recognition space in the required resolution. For cases where N>2 the resulting large number of cells makes a computation of the cell responses unfeasible. The process becomes inefficient particularly where the input data is sparsely distributed throughout a large input space.
  • It is the object of the present invention to provide a method by which the responses of the relevant cells in the higher recognition layers can be efficiently calculated without performing trivial calculations that contribute nothing to the solution.
  • SUMMARY OF THE INVENTION
  • In a method and a system for the implementation of multi-layered network object recognition in N-dimensional space, the structure of a neural recognition network is dynamically generated and adapted to recognize an object. The layers of the network are capable of recognizing key features of the input data by using evaluation rules to establish a hierarchical structure that can adapt to data position and orientation, varying data densities, geometrical scaling, and faulty or missing data. Based on successive hierarchical feature recognition and synthesis, sufficient relevant recognition cells are generated to enable data processing and information propagation through the recognition network without generating or computing unnecessary irrelevant cells.
  • Before any data is processed, a network hierarchy is defined and constructed comprising a succession of recognition layers which each contain a collection of nodes or cells. The number of cells in a layer varies during processing, and each layer is initially equipped with zero cells. Each layer recognizes a specific group of key features, and the cells in the layer are used to represent the presence and characteristics of the features. The layers are interlinked by ownership-type relationships in which a cell in a given layer is said to own a group of cells in a subordinate layer, and this subordinate group of cells is said to constitute the receptive field of the superordinate cell. This owner-subordinate relationship is repeated between each pair of successive layers from the lowest input layer to the final uppermost recognition layer. This final layer represents the answer to the overall recognition problem in that its cells recognize the key features that uniquely classify the pattern imposed on the input layer.
  • All layers have the following properties:
      • 1. Each layer represents an abstraction level of the object to be recognized, with the complexity of the abstraction being greater in the higher layers.
      • 2. Each layer is equipped with a set of key features that characterize an aspect or property of the object to be recognized, with said features being computable by examining the properties and responses of a subset of cells in the subordinate layer.
      • 3. Each layer is equipped with a rule or algorithm for determining whether a subordinate layer cell contributes positively or negatively to the recognition process carried out in said layer, specifically, a means of determining whether the inclusion of a given subordinate cell in the receptive field of a given superordinate cell is advantageous to the recognition function of the superordinate cell.
  • All cells have the further following properties:
      • 1. A 1-dimensional response vector (fuzzy polarization vector) which indicates the cell's recognition of the features which are key to the layer containing the cell.
      • 2. A collection of pointers representing references to a subset of cells in the layer subordinate to the layer containing said cell, collectively called receptive field, and a corresponding collection of weights.
      • 3. A single pointer representing a reference to a cell in the layer superordinate to the layer containing said cell, called owning cell.
      • 4. Variables containing computed geometric information such as unit normal vector, centroid, and orientation direction.
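A cell carrying these four properties might be sketched as follows in Python; this is an illustrative data-structure sketch only, and the class, field, and method names are assumptions, not taken from the patent:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass(eq=False)
class Cell:
    # 1. fuzzy polarization vector: the cell's recognition of its layer's key features
    polarization: List[float]
    # 2. receptive field: references to subordinate-layer cells, with one weight each
    receptive_field: List["Cell"] = field(default_factory=list)
    weights: List[float] = field(default_factory=list)
    # 3. single reference to the owning (superordinate) cell
    owner: Optional["Cell"] = None
    # 4. computed geometric information
    centroid: Tuple[float, float, float] = (0.0, 0.0, 0.0)
    normal: Tuple[float, float, float] = (0.0, 0.0, 1.0)

    def adopt(self, cell: "Cell", weight: float = 1.0) -> None:
        """Link a subordinate cell into this cell's receptive field."""
        cell.owner = self
        self.receptive_field.append(cell)
        self.weights.append(weight)

    def release(self, cell: "Cell") -> None:
        """Unlink a subordinate cell; it becomes an orphan."""
        i = self.receptive_field.index(cell)
        del self.receptive_field[i]
        del self.weights[i]
        cell.owner = None
```

The owner pointer and the receptive-field list are kept mutually consistent by `adopt` and `release`, mirroring the linking and unlinking operations described below.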
  • During the computation process, cells in the various hierarchical layers may be created and destroyed, linked and unlinked with owners, and assigned to, and removed from, receptive fields in an iterative convergent process that can implement neural network recognition techniques that result in a final collection of cells that adequately represents the object to be recognized in the various hierarchical layers. This final structure is both a hierarchical map of the input data as well as a network capable of recognizing the key features of the input data. Information flows between neighboring layers in both bottom-up and top-down directions during the convergence process: superordinate cells extract features by evaluating the properties of the subordinate cells in their receptive fields, and subordinate cells base their membership decisions, receptive field sizes, and evaluation parameters on information from the superordinate layer cells. Recognition occurs when the top-down signals and bottom-up signals are sufficiently mutually fortifying to establish a persistent stable activation level of all involved cells; at this point the iterative process has converged to a solution.
  • Overall solution convergence is driven by grouping cells into receptive fields when they have converging interests, that is, when they represent the same thing, as indicated by their recognition vectors, which must converge with that of the owner.
  • The computation process is initiated by transferring the input data into the lowest hierarchical layer (the input or zeroth layer). Typical input data may consist of simply a list of coordinates and a physical property that has been measured at that coordinate. In this case, a single input cell may be generated for each input data point, fully representing the input data with zero information loss.
  • The next superordinate layer is constructed by means of an appropriate rule or algorithm for grouping input layer cells. In principle, the input layer cells are divided into a number of groups, a new layer 1 cell is generated for each group, the receptive field pointers of the layer 1 cell are set to point to the cells in the group, and the cells in the group become owned by the new layer 1 cell. This initial grouping into receptive fields is repeated as each new layer is constructed, and, although the iterative solution process converges despite ill-defined initial groupings, a well-planned initial grouping speeds convergence.
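The patent leaves the grouping rule itself open. A minimal sketch, assuming a greedy nearest-seed rule (any sensible spatial partition would do, since a poor initial grouping only slows convergence, as noted above):

```python
import math

def initial_grouping(points, radius):
    """Crude greedy grouping of input-layer cells (an assumed rule).
    Each point joins the first group whose seed point lies within
    `radius`; otherwise it seeds a new group.  One layer 1 cell would
    then be generated per group, owning the group as its receptive
    field."""
    groups = []  # each entry: (seed_point, member_list)
    for p in points:
        for seed, members in groups:
            if math.dist(seed, p) <= radius:
                members.append(p)
                break
        else:  # no nearby group: start a new one
            groups.append((p, [p]))
    return groups
```

For example, four scan points forming two well-separated clusters yield two groups, and hence two initial layer 1 cells.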
  • When the first two layers have been constructed, the iterative solution process may begin, and may continue throughout and after the successive construction of the third through final layers. The iterative process may implement whatever neural network recognition techniques are appropriate to the key features of the relevant layer, generally simple analytical formulae in the lower layers and more complex pattern recognition algorithms in the higher layers, but always driving toward a stable state of mutual inter-layer reinforcement. In addition, during the iterative solution process, receptive field sizes and recognition parameters are adjusted, cells may reselect the receptive field to which they belong based on new information from higher layers, and cells may even find there is no existing receptive field they wish to join. Superordinate cells are created as required to take over ownership of orphaned cells, and are destroyed when their receptive fields have atrophied.
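One sweep of this iterative process can be sketched as follows, using hypothetical 1-D cells and a distance-based fitness rule standing in for the layer-specific recognition rules:

```python
class Owner:
    """Stand-in for a superordinate cell; `center` summarizes what it
    represents (hypothetical -- a real cell would carry a polarization
    vector and geometric features)."""
    def __init__(self, center):
        self.center = center
        self.members = []

def sweep(cells, owners, tolerance):
    """One sweep of the iterative process: every cell reselects the
    best-fitting owner; a cell accepted by no owner spawns a new one
    (orphans initiate their own owners); owners whose receptive fields
    have atrophied are destroyed; surviving owners re-evaluate
    themselves from their members."""
    for o in owners:
        o.members = []
    for c in cells:
        best = min(owners, key=lambda o: abs(c - o.center), default=None)
        if best is None or abs(c - best.center) > tolerance:
            best = Owner(c)            # orphan initiates a new owner
            owners.append(best)
        best.members.append(c)
    owners[:] = [o for o in owners if o.members]   # destroy atrophied owners
    for o in owners:                   # owners re-evaluate from their members
        o.center = sum(o.members) / len(o.members)
    return owners
```

Repeating such sweeps until no cell changes owner is the convergence criterion described above; the stable assignment is the recognized structure.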
  • The flexible dynamic structure has the following advantages:
      • Cells are generated only where there is data to recognize
      • Receptive field sizes are dynamic and membership criteria are based on rules, allowing receptive fields to adapt to varying data densities and geometric scaling of key features (the network is scale invariant)
      • Since, through the process of mutual reinforcement, the response of superordinate cells is activated by the presence of merely sufficient supporting subordinate cells, not all conceivable supporting cells, the recognition process succeeds despite faulty, noisy, or missing data, or even despite errors in the recognition of a minority of cells in lower layers (the network is fault-tolerant)
      • Since recognition cells are constructed where the data is found, a spatial translation of input data has no effect whatever on the ability of the network to recognize the assigned patterns (the network's recognition is position-invariant)
      • Since the network is fault-tolerant at each recognition layer, the network is capable of compensating for each way in which the data may differ from the recognition ideal (the network's recognition is distortion-invariant).
    DESCRIPTION OF A PREFERRED EMBODIMENT
  • A typical problem well suited for this network algorithm is 3-dimensional object recognition using as input data a set of 3-dimensional coordinate points representing random points on the surface of the object to be recognized. Such a dataset could be generated, for example, by a laser scanning device capable of detecting and outputting the 3-dimensional coordinates of its scan points. Scan data generated in an industrial plant setting for the purposes of documentation, computation, and analysis has a usefulness that is directly related to the degree or complexity of recognition, that is, to its intelligence. For example, recognizing the mere presence of a surface is accomplished by a laser scan, but is not very useful; to recognize that surface as a part of a cylinder is better, but to recognize it as part of a pipe of a certain size with specific end connections allows the generation of useful CAE models of existing plants. In the current implementation, the input data consists of a 3-dimensional laser scan of a section of an industrial plant.
  • The current implementation uses 5 layers abstracted as follows (that is, what a cell from each layer represents):
  • Layer 0: 3-D Point
  • Layer 1: Nearly flat surface patch
  • Layer 2: Curved or flat surface fragment
  • Layer 3: Geometric primitive (cylinder, box, edge, sphere)
  • Layer 4: Final 3-D object
  • The input layer of the network is filled with input data such that one input layer cell is generated for each scanned laser point and assigned the spatial coordinates of the scan point. A cell in this layer can only represent a spatial point, therefore its polarization vector has only one component which always has a value of 1.
  • The next step constructs Layer 1 by finding an unowned layer 0 cell, generating a new layer 1 cell as its owner, and searching in the neighborhood of the previously unowned layer 0 cell for other unowned layer 0 cells that can be added to the receptive field of the new layer 1 cell. This process is repeated until there are no unowned layer 0 cells. A layer 1 cell in this implementation can only represent a small surface patch locally approximated as flat, and therefore also has a polarization vector that has only one component which always has a value of 1.
  • With layer 1 constructed, it is now possible to perform the first optimization. First, each layer 1 cell now has a group of layer 0 cells which it owns and which form its receptive field. From these cells, the layer 1 cell can evaluate itself and recognize the features it is assigned to recognize. Since the cells of layer 1 are intended to represent flat surface patches or panels, they can be appropriately represented by the coordinates of their centroid and by a 3-dimensional vector representing the surface normal at the centroid, whereby these values can be computed by simple analytical formulae applied to the layer 0 cells in the receptive field. Once the layer 1 cells have been evaluated, the rule can be applied which judges the contribution of a layer 0 cell to the recognition process of the layer 1 cell to which it belongs. Since a layer 1 cell represents a flat surface panel, a layer 0 cell contributes poorly or detrimentally to the recognition process of the layer 1 cell if it does not lie in or near the plane represented by the layer 1 cell. In this case, the layer 0 cell is removed from the receptive field of the layer 1 cell and allowed to search for another, more appropriate layer 1 cell to join. If that search is unsuccessful, the orphaned layer 0 cell initiates a new layer 1 cell to own it. During successive iterations, that newly generated receptive field, which at first contains a single layer 0 cell, may grow by the acquisition of other layer 0 cells that are rejected from nearby receptive fields. The final state of layers 0 and 1 is such that each layer 0 cell is owned by a layer 1 cell, and each layer 1 cell has a well-defined receptive field with members that support the decision of the recognition process carried out by the layer 1 cell. In addition, all input data has been processed without loss of information, in that each input data point has generated a layer 0 cell, and each layer 0 cell has been given the opportunity to represent the surface to which it belongs in the layer 1 surface panels.
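The evaluation and rejection rule for a layer 1 panel might look like the following sketch. The patent says only "simple analytical formulae", so the cross-product normal estimate and the `tolerance` parameter are assumptions:

```python
def evaluate_panel(points, tolerance):
    """Evaluate a layer 1 'flat panel' cell from its receptive field.
    The centroid is the mean of the member points; the normal is
    estimated here from the cross product of two spanning edges (a
    crude stand-in for a least-squares plane fit).  Members farther
    than `tolerance` from the panel plane are rejected, mimicking the
    rule that expels poorly contributing layer 0 cells."""
    n = len(points)
    centroid = tuple(sum(p[i] for p in points) / n for i in range(3))
    ax, ay, az = (points[1][i] - points[0][i] for i in range(3))
    bx, by, bz = (points[2][i] - points[0][i] for i in range(3))
    nx, ny, nz = ay * bz - az * by, az * bx - ax * bz, ax * by - ay * bx
    length = (nx * nx + ny * ny + nz * nz) ** 0.5
    normal = (nx / length, ny / length, nz / length)

    def dist(p):  # distance from p to the plane through the centroid
        return abs(sum((p[i] - centroid[i]) * normal[i] for i in range(3)))

    kept = [p for p in points if dist(p) <= tolerance]
    rejected = [p for p in points if dist(p) > tolerance]
    return centroid, normal, kept, rejected
```

A rejected point would then search for a neighboring panel to join, or initiate a new layer 1 cell as described above.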
  • Next the layer 2 cells are constructed by a similar process: for each unowned layer 1 cell, a new layer 2 cell is created and allowed to search for further receptive field members. Since the layer 2 cells can represent various types of surface elements, including curved surfaces, they have additional properties or variables to represent the features unique to this recognition layer (strength of curvature, that is, radius of curvature, and curvature orientation) and a two-component polarization vector to indicate the type of surface represented (flat or curved). These properties can be computed from the cells of the receptive field, and the layer 2 cells can be optimized by an iterative process as was done for the layer 1 cells. The conditions for membership in the receptive field of a layer 2 cell differ, however, from the conditions for membership in the receptive field of a layer 1 cell. Since layer 2 cells may represent curved surfaces, they must be more lenient in their selection of members, also allowing layer 1 cells to join that have their centroids somewhat outside the average plane represented by the layer 2 cell. On the other hand, layer 2 cells may impose new membership conditions on layer 1 cells, for example requiring that the receptive field of a layer 1 cell lie adjacent to the receptive field of another layer 1 cell that is already a member of the layer 2 cell's receptive field before allowing membership.
  • Top-down information flow may contribute to the recognition process in the following way: once a radius of curvature has been computed for a layer 2 cell, it becomes clear that the cell represents a curved surface segment that is likely to be part of a cylinder or sphere. For the purpose of recognizing that cylinder or sphere, it is unnecessary to include information from points that lie more than twice the computed radius of curvature away from the computed center. Thus it is possible for a layer 2 cell to restrict the size of the receptive fields of the layer 1 cells it owns to a certain value, and to allow the layer 1 cells to reevaluate themselves based on this new information.
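A minimal sketch of this top-down pruning step, with assumed names: given the center and radius of curvature computed by a layer 2 cell, points farther than twice the radius from the center are dropped from the subordinate receptive fields.

```python
def restrict_receptive_field(points, center, radius):
    """Keep only points within twice the radius of curvature of the center."""
    limit = 2.0 * radius

    def dist(p):
        return sum((p[i] - center[i]) ** 2 for i in range(3)) ** 0.5

    return [p for p in points if dist(p) <= limit]
```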
  • Layer 3 cells are then generated and optimized analogously to layer 1 and layer 2 cells. Layer 3 cells are built up from the features of layer 2 cells, and represent cylindrical segments, edges, boxes, and whatever other geometric primitives are necessary to represent the objects found in the input data. Layer 3 cells again contain properties or variables not found in layer 2 cells to assess the key features detected, and a four-component polarization vector to specify the type of object the layer 3 cell represents.
  • The polarization vector of a cell indicates the degree to which the cell belongs to one of the recognition categories and serves to assess the compatibility between cells of differing layers. In the example of the layer 3 cells mentioned above, the polarization vector tends toward one of the following states:
  • [1 0 0 0]≡Plane
  • [0 1 0 0]≡Edge
  • [0 0 1 0]≡Cylinder
  • [0 0 0 1]≡Sphere
  • During the initial construction of the layer 3 cells, the polarization vector is set to [1 1 1 1] meaning that the cell is simultaneously a flat plane, an edge, a cylinder, and a sphere. As the cell acquires subordinate cells and reevaluates itself, the polarization vector is refined and asymptotically approaches one of the states listed above.
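One plausible way to realize this refinement (the text does not specify the update rule) is to multiply the polarization vector component-wise by per-category evidence scores on each reevaluation and renormalize, so that repeated iterations drive it from [1 1 1 1] toward one of the pure states listed above. The evidence values below are illustrative.

```python
def refine_polarization(p, evidence):
    """One reevaluation step: weight each component by its evidence, renormalize."""
    updated = [pi * ei for pi, ei in zip(p, evidence)]
    total = sum(updated)
    return [u / total for u in updated]

p = [1.0, 1.0, 1.0, 1.0]          # initially plane, edge, cylinder, and sphere at once
evidence = [0.1, 0.1, 0.9, 0.1]   # the receptive field looks cylindrical
for _ in range(10):
    p = refine_polarization(p, evidence)
# p now approximates the pure state [0 0 1 0], i.e. "cylinder"
```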
  • The polarization vector is used in conjunction with a compatibility matrix to assess the utility or feasibility of including a given layer 2 cell in the receptive field of a given layer 3 cell. This is carried out by multiplying the polarization vector of the layer 3 cell with the compatibility matrix and with the polarization vector of the layer 2 cell (matrix rows ordered plane, edge, cylinder, sphere; columns flat, curved):

    Compatibility = P3 · X32 · P2

                                                       [ 1  0 ]
      = [ρ_PLANE  ρ_EDGE  ρ_CYLINDER  ρ_SPHERE]  ·     [ 1  0 ]  ·  [ ρ_FLAT   ]
                                                       [ 0  1 ]     [ ρ_CURVED ]
                                                       [ 0  1 ]
  • The resulting value indicates whether the layer 2 cell is a suitable receptive field cell for the layer 3 cell.
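The compatibility product P3 · X32 · P2 can be evaluated directly. The sketch below assumes the 4×2 matrix pairs the plane and edge categories with the "flat" polarization and the cylinder and sphere categories with "curved", which is one reading of the matrix given above.

```python
# Assumed pairing: rows are (plane, edge, cylinder, sphere), columns (flat, curved)
X32 = [[1, 0],   # plane    <- flat
       [1, 0],   # edge     <- flat
       [0, 1],   # cylinder <- curved
       [0, 1]]   # sphere   <- curved

def compatibility(p3, p2):
    """Evaluate P3 . X32 . P2 for polarization vectors p3 (length 4), p2 (length 2)."""
    return sum(p3[i] * X32[i][j] * p2[j]
               for i in range(4) for j in range(2))

# A cylinder-polarized layer 3 cell against a curved layer 2 cell: compatible
print(compatibility([0, 0, 1, 0], [0, 1]))   # -> 1
# ...against a flat layer 2 cell: incompatible
print(compatibility([0, 0, 1, 0], [1, 0]))   # -> 0
```

A high value admits the layer 2 cell into the layer 3 cell's receptive field; a low value rejects it.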
  • Finally, layer 4 cells represent the answer to the recognition problem in that they are abstractions of elements which are a more intelligent and more complete representation of the data than the original layer 0 data points.

Claims (8)

1. A method for implementing object recognition in N-dimensional space by means of a network having multiple layers, said method comprising the following:
The structure of the layers is hierarchical in that the layers are ordered and layers that are higher in the hierarchical order have cells connected by links representing ownership of cells contained by layers that are lower in the hierarchical order;
The layers are assigned certain key features which the cells in the respective layers are capable of recognizing and representing;
The layers are dynamic in size and in structure in that member cells may be created or destroyed and links between cells of adjoining layers may be created or destroyed so as to adapt to input data to be recognized;
The layers are equipped with a rule for determining whether cells from subordinate layers should be included in receptive fields of cells from higher layers;
The layer cells are equipped with a polarization vector which serves to determine the compatibility of said cells with cells of neighboring layers;
The network is adapted through an iterative process in which cell ownership is modified and cells are created or destroyed to converge the state of the cells to a final persistent stable state of mutual reinforcement which represents a solution to the recognition problem.
2. A method according to claim 1, wherein links representing ownership relationships between cells of differing layers are created not just between adjacent layers but also between non-adjacent layers to assist in the recognition process.
3. A method according to claim 1, wherein the arrangement of the layers is not linear, but is itself a branched network.
4. A method according to claim 1, wherein the implementation of various features is selectively distributed among multiple hardware or software processing systems for improved performance.
5. A system for implementing object recognition in N-dimensional space by means of a network having multiple layers, said system comprising the following:
The structure of the layers is hierarchical in that the layers are ordered and layers that are higher in the hierarchical order have cells connected by links representing ownership of cells contained by layers that are lower in the hierarchical order;
The layers are assigned certain key features which the cells in the respective layers are capable of recognizing and representing;
The layers are dynamic in size and in structure in that member cells may be created or destroyed and links between cells of adjoining layers may be created or destroyed so as to adapt to input data to be recognized;
The layers are equipped with a rule for determining whether cells from subordinate layers should be included in receptive fields of cells from higher layers;
The layer cells are equipped with a polarization vector which serves to determine the compatibility of said cells with cells of neighboring layers;
The network is adapted through an iterative process in which cell ownership is modified and cells are created or destroyed to converge the state of the cells to a final persistent stable state of mutual reinforcement which represents a solution to the recognition problem.
6. A system according to claim 5, wherein links representing ownership relationships between cells of differing layers are created not just between adjacent layers but also between non-adjacent layers to assist in the recognition process.
7. A system according to claim 5, wherein the arrangement of the layers is not linear, but is itself a branched network.
8. A system according to claim 5, wherein the implementation of various features is selectively distributed among multiple hardware or software processing systems for improved performance.

Priority Applications (4)

Application Number Priority Date Filing Date Title
US11/072,880 US20060204097A1 (en) 2005-03-04 2005-03-04 Method and system for implementing N-dimensional object recognition using dynamic adaptive recognition layers
PCT/US2006/007502 WO2006096478A2 (en) 2005-03-04 2006-03-02 Method and system for implementing n-dimensional object recognition using dynamic adaptive recognition layers
EP06736767.2A EP1866841A4 (en) 2005-03-04 2006-03-02 Method and system for implementing n-dimensional object recognition using dynamic adaptive recognition layers
US11/879,001 US8059890B2 (en) 2005-03-04 2007-07-13 Method for implementing n-dimensional object recognition using dynamic adaptive recognition layers

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US11/072,880 US20060204097A1 (en) 2005-03-04 2005-03-04 Method and system for implementing N-dimensional object recognition using dynamic adaptive recognition layers

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US11/879,001 Continuation-In-Part US8059890B2 (en) 2005-03-04 2007-07-13 Method for implementing n-dimensional object recognition using dynamic adaptive recognition layers

Publications (1)

Publication Number Publication Date
US20060204097A1 true US20060204097A1 (en) 2006-09-14

Family

ID=36953864

Family Applications (2)

Application Number Title Priority Date Filing Date
US11/072,880 Abandoned US20060204097A1 (en) 2005-03-04 2005-03-04 Method and system for implementing N-dimensional object recognition using dynamic adaptive recognition layers
US11/879,001 Expired - Fee Related US8059890B2 (en) 2005-03-04 2007-07-13 Method for implementing n-dimensional object recognition using dynamic adaptive recognition layers

Family Applications After (1)

Application Number Title Priority Date Filing Date
US11/879,001 Expired - Fee Related US8059890B2 (en) 2005-03-04 2007-07-13 Method for implementing n-dimensional object recognition using dynamic adaptive recognition layers

Country Status (3)

Country Link
US (2) US20060204097A1 (en)
EP (1) EP1866841A4 (en)
WO (1) WO2006096478A2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210142147A1 (en) * 2017-06-27 2021-05-13 Semiconductor Energy Laboratory Co., Ltd. Portable information terminal and problem solving system

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8738554B2 (en) 2011-09-16 2014-05-27 International Business Machines Corporation Event-driven universal neural network circuit
US8874498B2 (en) 2011-09-16 2014-10-28 International Business Machines Corporation Unsupervised, supervised, and reinforced learning via spiking computation
US8799199B2 (en) 2011-12-14 2014-08-05 International Business Machines Corporation Universal, online learning in multi-modal perception-action semilattices
US8626684B2 (en) 2011-12-14 2014-01-07 International Business Machines Corporation Multi-modal neural network for universal, online learning
KR101388749B1 (en) * 2013-10-25 2014-04-29 중앙대학교 산학협력단 System and method for 3d reconstruction of as-built industrial model from 3d data
KR101364375B1 (en) 2013-10-25 2014-02-18 중앙대학교 산학협력단 System and method for extracting a specific object from 3d data
US11664125B2 (en) * 2016-05-12 2023-05-30 Siemens Healthcare Gmbh System and method for deep learning based cardiac electrophysiology model personalization
US10895954B2 (en) * 2017-06-02 2021-01-19 Apple Inc. Providing a graphical canvas for handwritten input
CN108509949B (en) * 2018-02-05 2020-05-15 杭州电子科技大学 Target detection method based on attention map
CN108990833A (en) * 2018-09-11 2018-12-14 河南科技大学 A kind of animal movement behavior method of discrimination and device based on location information
CN112016638B (en) * 2020-10-26 2021-04-06 广东博智林机器人有限公司 Method, device and equipment for identifying steel bar cluster and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4355301A (en) * 1980-05-27 1982-10-19 Sumitomo Electric Industries, Ltd. Optical character reading system
US4975975A (en) * 1988-05-26 1990-12-04 Gtx Corporation Hierarchical parametric apparatus and method for recognizing drawn characters
US5295198A (en) * 1988-10-14 1994-03-15 Harris Corporation Pattern identification by analysis of digital words

Also Published As

Publication number Publication date
EP1866841A2 (en) 2007-12-19
EP1866841A4 (en) 2013-12-18
WO2006096478A3 (en) 2007-05-24
US8059890B2 (en) 2011-11-15
WO2006096478A2 (en) 2006-09-14
US20070258649A1 (en) 2007-11-08

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION